PPDP'19: PRINCIPLES AND PRACTICE OF DECLARATIVE PROGRAMMING
PROGRAM FOR MONDAY, OCTOBER 7TH


09:00-10:00 Session 1: PPDP Invited Talk
09:00
Semantic Foundations for Gradual Typing
10:00-10:30 Coffee Break
10:30-12:30 Session 2: Lambda Calculus and Type Theory
10:30
Exception Handling and Classical Logic

ABSTRACT. We present Lambda-try, an extension of the Lambda-calculus with named exception handling via *throw n(N)* and *catch n(x) = M*, together with a basic notion of type assignment expressing recoverable exception handling, which we show to be sound. We define an interpretation of Lambda-try into Parigot's Lambda-mu-calculus and show that reduction (both lazy and call by value) is preserved by the interpretation. We also show that types assignable in the basic system are preserved by the interpretation.

We then add a notion of total failure through *panic* that escapes applicative contexts without being caught by a handler, and show that we can interpret this in Lambda-mu by adding *tp* as a destination. We argue that introducing handlers for *panic* breaks the relation with Lambda-mu.

We conclude the paper by showing that it is possible to add handlers for *panic* to Lambda-try by extending the language with a conditional construct that is typed in a non-traditional way. This allows both recoverable exceptions and total failure to be dealt with by handlers; we show a non-standard soundness result for this system, which cannot be expressed in Lambda-mu or in logic.
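To make the throw/catch mechanism concrete, here is a small Python sketch of *named* exception handling in the spirit of Lambda-try's *throw n(N)* and *catch n(x) = M*. All names (`Named`, `throw`, `catch`, `safe_div`) are illustrative inventions, not the paper's formal calculus; in particular, Python's dynamic exceptions stand in for the calculus's typed reduction rules.

```python
class Named(Exception):
    """An exception tagged with a handler name, carrying a payload."""
    def __init__(self, name, payload):
        self.name, self.payload = name, payload

def throw(name, payload):
    raise Named(name, payload)

def catch(name, body, handler):
    """Run `body`; if a Named exception with `name` escapes, pass its
    payload to `handler` (recoverable handling). Other names propagate,
    like a throw searching for its matching catch."""
    try:
        return body()
    except Named as e:
        if e.name == name:
            return handler(e.payload)
        raise

# A computation that throws to handler "div" on division by zero:
def safe_div(a, b):
    return catch("div",
                 lambda: throw("div", "zero") if b == 0 else a / b,
                 lambda msg: float("inf"))
```

Note that a throw with a non-matching name re-raises past the handler, which mirrors how a named throw escapes intervening catches for other names.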

11:00
Normalization by Evaluation for Call-by-Push-Value and Polarized Lambda-Calculus

ABSTRACT. We observe that normalization by evaluation for simply-typed lambda-calculus with weak coproducts can be carried out in a weak bi-cartesian closed category of presheaves equipped with a monad that allows us to perform case distinction on neutral terms of sum type. The placement of the monad influences the normal forms we obtain: for instance, placing the monad on coproducts gives us eta-long beta-pi normal forms where pi refers to permutation of case distinctions out of elimination positions. We further observe that placing the monad on every coproduct is rather wasteful, and an optimal placement of the monad can be determined by considering polarized simple types inspired by focalization. Polarization classifies types into positive and negative, and it is sufficient to place the monad at the embedding of positive types into negative ones. We consider two calculi based on polarized types: pure call-by-push-value (CBPV) and polarized lambda-calculus, the natural deduction calculus corresponding to focalized sequent calculus. For these two calculi, we present algorithms for normalization by evaluation. We further discuss different implementations of the monad and their relation to existing normalization proofs for lambda-calculus with sums. Our developments have been partially formalized in the Agda proof assistant.
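As background, the core of normalization by evaluation can be shown in a few lines. The sketch below is a deliberately minimal NbE implementation for the pure lambda-calculus with de Bruijn indices; it omits everything specific to the paper (sums, the monad on coproducts, polarization, CBPV), and all function names are our own.

```python
# Terms: ("var", i) with de Bruijn indices, ("lam", body), ("app", f, a).

def evaluate(term, env):
    """Evaluate a term to a semantic value (closure or neutral)."""
    tag = term[0]
    if tag == "var":
        return env[term[1]]
    if tag == "lam":
        return ("closure", lambda v: evaluate(term[1], [v] + env))
    f, a = evaluate(term[1], env), evaluate(term[2], env)
    if f[0] == "closure":
        return f[1](a)
    return ("neutral-app", f, a)          # stuck application

def reify(value, depth):
    """Read a semantic value back into a normal-form term."""
    if value[0] == "closure":
        fresh = ("neutral-var", depth)    # fresh variable at this binding depth
        return ("lam", reify(value[1](fresh), depth + 1))
    return reify_neutral(value, depth)

def reify_neutral(n, depth):
    if n[0] == "neutral-var":
        return ("var", depth - n[1] - 1)  # de Bruijn level -> index
    return ("app", reify_neutral(n[1], depth), reify(n[2], depth))

def normalize(term):
    return reify(evaluate(term, []), 0)

# (\x. x) (\x. \y. x) normalizes to \x. \y. x
K = ("lam", ("lam", ("var", 1)))
I = ("lam", ("var", 0))
```

The paper's contribution starts where this sketch stops: handling weak coproducts requires a monad in the semantic domain so that case distinction on neutral terms of sum type becomes possible.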

11:30
Crumbling Abstract Machines

ABSTRACT. Extending the lambda-calculus with a construct for sharing, such as let expressions, enables a special representation of terms: iterated applications are decomposed by introducing sharing points between any two of them, reducing to the case where applications have only values as immediate subterms. This work studies how such a crumbled representation of terms impacts the design and the efficiency of abstract machines for call-by-value evaluation. Regarding design, it removes the need for data structures encoding the evaluation context, such as the applicative stack and the dump, which get encoded in the environment. Regarding efficiency, we show that there is no slowdown, clarifying in particular a point raised by Kennedy about the potential inefficiency of such a representation. Moreover, we prove that everything smoothly scales up to the delicate case of open terms, needed to implement proof assistants. Along the way, we also point out that continuation-passing style transformations (which may be alternatives to our representation) do not scale up to the open case.
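The decomposition described above can be sketched as an A-normal-form-style transformation: every non-value application is named by a let-binding, so applications end up with only variables or lambdas as immediate subterms. This is a rough approximation of the crumbled representation, given under our own term encoding; the names (`crumble`, `to_crumbled`) are hypothetical.

```python
counter = 0
def fresh():
    global counter
    counter += 1
    return f"_x{counter}"

def is_value(t):
    return t[0] in ("var", "lam")

def crumble(term, bindings):
    """Return a value standing for `term`, appending a let-binding
    (name, application) to `bindings` for each non-value application."""
    if is_value(term):
        return term
    # term = ("app", f, a): crumble the subterms first, then name the call.
    f = crumble(term[1], bindings)
    a = crumble(term[2], bindings)
    name = fresh()
    bindings.append((name, ("app", f, a)))
    return ("var", name)

def to_crumbled(term):
    bindings = []
    v = crumble(term, bindings)
    return bindings, v

# ((f x) (g y)) becomes, in order:
#   let _x1 = f x; let _x2 = g y; let _x3 = _x1 _x2; result _x3
```

In an abstract machine over such terms, the sequence of bindings plays the role that the applicative stack and dump play in a machine over ordinary terms.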

12:00
Sharing Equality is Linear

ABSTRACT. The lambda-calculus is a handy formalism to specify the evaluation of higher-order programs. It is not very handy, however, when one interprets the specification as an execution mechanism, because terms can grow exponentially with the number of beta-steps. This is why implementations of functional languages and proof assistants always rely on some form of sharing of subterms.

These frameworks, however, do not only evaluate lambda-terms; they also have to compare them for equality. In the presence of sharing, one is actually interested in equality (or, more precisely, alpha-conversion) of the underlying *unshared* lambda-terms. The literature contains algorithms for such a *sharing equality* that are polynomial in the sizes of the shared terms.

This paper improves on the bounds in the literature by presenting the first *linear-time* algorithm. Like others before us, we are inspired by Paterson and Wegman's algorithm for first-order unification, itself based on representing terms with sharing as DAGs, and sharing equality as bisimulation of DAGs.
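The key point (that shared terms can be compared without unfolding them) can be illustrated with a small sketch: terms are DAG nodes, and memoizing on node *pairs* keeps the traversal proportional to the DAG sizes rather than the exponentially larger unshared trees. This is a hypothetical simplification; the paper's linear-time algorithm is a Paterson-Wegman-style propagation, not this memoized recursion. Terms use de Bruijn indices, so syntactic equality coincides with alpha-conversion.

```python
# Nodes: ("var", i), ("lam", body), ("app", f, a); sharing = reused tuples.

def shared_equal(a, b, memo=None):
    """Check equality of the unshared terms denoted by DAG nodes a and b,
    visiting each pair of nodes at most once."""
    if memo is None:
        memo = set()
    if a is b or (id(a), id(b)) in memo:
        return True                       # same node, or pair already proven equal
    if a[0] != b[0]:
        return False
    if a[0] == "var":
        return a[1] == b[1]
    # "lam" and "app": compare children structurally.
    ok = all(shared_equal(x, y, memo) for x, y in zip(a[1:], b[1:]))
    if ok:
        memo.add((id(a), id(b)))          # record proven-equal pairs
    return ok
```

Any failed comparison aborts the whole check immediately, so only positive results need to be cached; on a DAG with n nodes on each side this visits O(n^2) pairs in the worst case, which is exactly the polynomial behaviour the paper improves to linear.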

12:30-14:00 Lunch Break
14:00-15:00 Session 3: Term Rewriting
14:00
Inductive Theorem Proving in Non-terminating Rewriting Systems and Its Application to Program Transformation

ABSTRACT. We present a framework for proving inductive theorems over first-order equational theories, using techniques of implicit induction developed in the field of term rewriting. In this framework, we make use of automated confluence provers, which have been developed intensively in recent years, as well as a novel sufficient-completeness condition, called local sufficient completeness. This condition is key to the automated proof of inductive theorems for term rewriting systems that include non-terminating functions. We also apply the technique to show the correctness of program transformations realised as equivalent transformations of term rewriting systems.

14:30
Generic Encodings of Constructor Rewriting Systems

ABSTRACT. Rewriting is a formalism widely used in computer science and mathematical logic. The classical formalism has been extended, in the context of functional languages, with an order over the rules and, in the context of rewrite-based languages, with negation over patterns. We propose in this paper a concise and clear algorithm computing the difference over patterns, which can be used to define generic encodings of constructor term rewriting systems with negation and order into classical term rewriting systems. As a direct consequence, established methods for term rewriting systems can be applied to analyze properties of the extended systems. The approach can also be seen as a generic compiler targeting any language that provides basic pattern matching primitives. The formalism also provides a new method for deciding whether a set of patterns subsumes a given pattern, and thus for checking for the presence of useless patterns or the completeness of a set of patterns.
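The subsumption check mentioned at the end of the abstract is easy to illustrate for plain constructor patterns (no order or negation, which are the paper's actual subject). In this hypothetical encoding a pattern is either the wildcard `"_"` or a pair of a constructor name and a list of sub-patterns; `subsumes(p, q)` holds when every term matched by `q` is also matched by `p`.

```python
def subsumes(p, q):
    """Does pattern p match everything that pattern q matches?"""
    if p == "_":
        return True                       # wildcard matches anything
    if q == "_":
        return False                      # a constructor cannot cover everything
    c1, args1 = p
    c2, args2 = q
    # Same constructor, and each sub-pattern of p subsumes q's (same arity).
    return c1 == c2 and all(subsumes(a, b) for a, b in zip(args1, args2))

# cons(_, _) subsumes cons(0, nil), but not the wildcard itself.
```

A pattern in a match sequence is useless exactly when it is subsumed by (a combination of) the patterns before it; the difference over patterns that the paper computes is what handles the "combination" part, which this one-against-one check does not.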

15:00-15:30 Coffee Break
15:30-17:00 Session 4: Programming Language Semantics
15:30
Property-Based Testing via Proof Reconstruction

ABSTRACT. Property-based testing (PBT) is a technique for validating code against an executable specification by automatically generating test data. We present a proof-theoretical reconstruction of this style of testing for relational specifications and employ the Foundational Proof Certificate framework to describe test generators. We do this by presenting certain kinds of ``proof outlines'' that can be used to describe various common generation strategies in the PBT literature, ranging from random to exhaustive, including their combination. We also address the shrinking of counterexamples as a first step towards their explanation. Once generation is accomplished, the testing phase boils down to a standard logic programming search. After illustrating our techniques on simple, first-order (algebraic) data structures, we lift them to data structures containing bindings, using $\lambda$-tree syntax. The lambdaProlog programming language is capable of performing both the generation and the checking of tests. We validate this approach by tackling benchmarks in the metatheory of programming languages coming from related tools such as PLT-Redex.
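For readers new to PBT, the generate-and-check loop being reconstructed proof-theoretically in this paper looks, in its plainest random-generation form, like the following sketch. The names (`gen_list`, `check`) are illustrative and have nothing to do with the paper's proof-certificate machinery.

```python
import random

def gen_list(rng, max_len=5, max_val=9):
    """A random generator for small lists of small integers."""
    return [rng.randint(0, max_val) for _ in range(rng.randint(0, max_len))]

def check(prop, gen, trials=100, seed=0):
    """Generate random inputs and return the first counterexample, if any."""
    rng = random.Random(seed)
    for _ in range(trials):
        x = gen(rng)
        if not prop(x):
            return x                      # counterexample found
    return None                           # property survived all trials

# True property: reversing twice is the identity.
ok = check(lambda xs: list(reversed(list(reversed(xs)))) == xs, gen_list)
# False property: reversal is the identity (fails on non-palindromes).
bad = check(lambda xs: list(reversed(xs)) == xs, gen_list)
```

The paper's point is that such generators (random, exhaustive, or mixed) can be described uniformly as proof outlines, after which testing itself is ordinary logic programming search.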

16:00
Intrinsically-Typed Mechanized Semantics for Session Types

ABSTRACT. Session types have emerged as a powerful paradigm for structuring communication-based programs. They guarantee type soundness and session fidelity for concurrent programs with sophisticated communication protocols. As type soundness proofs for languages with session types are tedious and technically involved, it is rare to see mechanized soundness proofs for these systems.

We present an executable intrinsically typed small-step semantics for a realistic functional session type calculus. The calculus includes linearity, recursion, and recursive sessions with subtyping. Asynchronous communication is modeled with an encoding.

The semantics is implemented in Agda as an intrinsically typed, interruptible CEK machine. This implementation proves type preservation and a particular notion of progress by construction.

16:30
An Adequate While-Language for Hybrid Computation

ABSTRACT. Hybrid computation harbours discrete and continuous dynamics in the form of an entangled mixture, inherently present in various natural phenomena and in applications ranging from control theory to microbiology. The emergent behaviours bear signs of both computational and physical processes, and thus present difficulties not only in their analysis, but also in describing them adequately in a structural, well-founded way.

In order to tackle these issues and, more generally, to investigate hybridness as a dedicated computational phenomenon, we introduce a language for hybrid computation inspired by the fine-grain call-by-value paradigm. We equip it with operational and computationally adequate denotational semantics. The latter crucially relies on a hybrid monad supporting an (Elgot) iteration operator that we developed elsewhere. As an intermediate step, we introduce a more lightweight duration semantics furnished with analogous results and based on a new duration monad that we introduce as a lightweight counterpart to the hybrid monad.