
08:45-09:00 Session 22G: Welcome
Location: FH, Dissertantenraum E104
09:00-10:20 Session 23I
Location: FH, Dissertantenraum E104


Extensional Models of Typed Lambda-mu Calculus
SPEAKER: Koji Nakazawa

ABSTRACT. This paper shows that the stream models introduced by Nakazawa and Katsumata can be adapted to a typed setting. A variant of de'Liguoro's type system for an extension of the Lambda-mu calculus, called $\Lambda\mu_{cons}$, is sound and complete with respect to the stream models. It is proved that any individual stream model with whole function spaces and infinite domains for base types characterizes the extensional equality of $\Lambda\mu_{cons}$. This result corresponds to Friedman's theorem for the simply-typed lambda calculus and the full type hierarchies.

10:15-10:45 Coffee Break
10:50-13:00 Session 27
Location: FH, Dissertantenraum E104
A Translation of Intersection and Union Types for the lambda-mu Calculus

ABSTRACT. We introduce an intersection and union type system for the lambda-mu calculus, which includes a restricted version of the traditional union-elimination rule. Strong normalisation of terms typable by the system is shown via a translation of intersection and union types into intersection and product types, relying on the result of strong normalisation of terms typable by van Bakel, Barbanera and de'Liguoro's system.

Dualized Type Theory
SPEAKER: unknown

ABSTRACT. We propose a new bi-intuitionistic type theory called Dualized Type Theory (DTT). It is a type theory with perfect intuitionistic duality, and corresponds to a single-sided polarized sequent calculus. We prove DTT strongly normalizing, and prove type preservation. DTT is based on a new propositional bi-intuitionistic logic called Dualized Intuitionistic Logic (DIL) that builds on Pinto and Uustalu's logic L. DIL is a simplification of L which removes several admissible inference rules from L while maintaining consistency and completeness. Furthermore, DIL is defined using a dualized syntax by labeling formulas and logical connectives with polarities thus reducing the number of inference rules needed to define the logic. We give a direct proof of consistency, but prove completeness by reduction to L.
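As an aside for readers unfamiliar with dualized syntax, the general technique of labeling formulas and connectives with polarities can be sketched as a small datatype. This is a hypothetical illustration of the technique only — the constructor names and connective set below are invented and do not reproduce DIL's actual grammar:

```python
# Hypothetical sketch of a dualized (polarity-indexed) syntax.
# One constructor covers each de Morgan-dual pair of connectives,
# so inference rules need only be stated once per pair.

POS, NEG = "+", "-"

def dual(p):
    return NEG if p == POS else POS

# Formulas as tagged tuples (illustrative, not DIL's grammar):
def var(p, x):      return ("var", p, x)     # literal / negated literal
def unit(p):        return ("unit", p)       # truth / falsity
def bin_(p, a, b):  return ("bin", p, a, b)  # conjunction (+) / disjunction (-)

# Negation is not a connective of its own: it flips every polarity.
def neg(f):
    tag = f[0]
    if tag == "var":
        return ("var", dual(f[1]), f[2])
    if tag == "unit":
        return ("unit", dual(f[1]))
    return ("bin", dual(f[1]), neg(f[2]), neg(f[3]))
```

For instance, `neg(bin_(POS, var(POS, "p"), var(POS, "q")))` yields the NEG-polarized formula corresponding to ¬p ∨ ¬q, and `neg` is an involution — the kind of syntactic duality the abstract refers to.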

A proof-theoretic view on scheduling in concurrency

ABSTRACT. This paper elaborates on a new approach to the proof-theoretic study of concurrent interaction called "proofs as schedules". Observing that proof theory is well suited to the description of confluent systems, while concurrency has non-determinism as a fundamental feature, we develop a correspondence where proofs provide what is needed to make concurrent systems confluent, namely scheduling. In our logical system, processes and schedulers appear explicitly as proofs in different fragments of the proof language, and cut elimination between them corresponds to execution of a concurrent system. This separation of roles suggests new insights for the denotational semantics of processes and new methods for the translation of pi-calculi into prefix-less formalisms (like solos) as the operational counterpart of translations between proof systems.

A type system for Continuation Calculus
SPEAKER: unknown

ABSTRACT. Continuation Calculus (CC), introduced by Geron and Geuvers [2], is a simple foundational model for functional computation. It is closely related to lambda calculus and term rewriting, but it has no variable binding and no pattern matching. It is Turing complete and evaluation is deterministic. Notions like “call-by-value” and “call-by-name” computation are available by choosing appropriate function definitions: e.g. there is a call-by-value and a call-by-name addition function. In the present paper we extend CC with types, to be able to define data types in a canonical way, and functions over these data types, defined by iteration. The iteration scheme comes in two flavors: a call-by-value and a call-by-name iteration scheme. Data type definitions follow the so-called “Scott encoding” of data, as opposed to the more familiar “Church encoding”. We give examples of how to define functions over data types and how to reason with these functions.
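For readers unfamiliar with the distinction, the "Scott encoding" mentioned above can be sketched directly with untyped closures. This is a generic illustration of Scott data — using general recursion where the paper's typed CC would use its typed iteration schemes — not the CC term syntax itself:

```python
# Scott-encoded naturals: a number is a case-analysis function taking
# one branch per constructor (uncurried here for readability).
zero = lambda z, s: z
def suc(n):
    return lambda z, s: s(n)

# Case analysis is immediate: the predecessor comes for free, unlike
# with the Church encoding, where it requires an iteration trick.
def pred(n):
    return n(zero, lambda m: m)

# Iteration is NOT built into Scott data; here it is supplied by general
# recursion (the paper's typed CC instead provides cbv/cbn iterators).
def add(m, n):
    return m(n, lambda m2: suc(add(m2, n)))

def to_int(n):
    return n(0, lambda m: 1 + to_int(m))
```

The contrast with the Church encoding: a Church numeral is its own iterator (it applies a function n times), whereas a Scott numeral merely stores its predecessor and exposes pattern matching.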

Confluence for classical logic through the distinction between values and computations
SPEAKER: Ralph Matthes

ABSTRACT. We apply an idea that originated in the theory of programming languages - a monadic meta-language with a distinction between values and computations - in the design of a calculus of cut-elimination for classical logic.

The cut-elimination calculus we obtain comprehends the call-by-name and call-by-value fragments of Curien-Herbelin's lambda-bar-mu-mu-tilde-calculus without losing confluence, and is based on a distinction of "modes" in the proof expressions and "mode" annotations in types. Modes resemble colors and polarities, but are quite different: we give meaning to them in terms of a monadic meta-language where the distinction between values and computations is fully explored. This meta-language is a refinement of the classical monadic language previously introduced by the authors, and is also developed in the paper.
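The value/computation distinction invoked here is Moggi-style: values of type A embed into computations of type T A via return, and computations are sequenced with bind. The sketch below uses the continuation monad — the monad traditionally associated with classical control — purely as an illustration of this distinction, not as the authors' meta-language:

```python
# A computation producing an A is a function from an A-consumer
# (the continuation) to a final answer.

def ret(x):            # values embed into computations
    return lambda k: k(x)

def bind(m, f):        # sequencing: run m, feed its value to f
    return lambda k: m(lambda x: f(x)(k))

def run(m):            # close off a computation with the identity continuation
    return m(lambda x: x)

# Classical control: call/cc captures the current continuation as a value,
# which is where keeping values and computations apart starts to matter.
def callcc(f):
    return lambda k: f(lambda x: lambda _ignored: k(x))(k)
```

Here `run(bind(ret(2), lambda x: ret(x + 3)))` evaluates to 5, while `run(bind(callcc(lambda exit: bind(exit(10), lambda _: ret(99))), lambda x: ret(x + 1)))` evaluates to 11: the captured continuation discards the 99 branch.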

13:00-14:30 Lunch Break
14:30-16:00 Session 31S
Location: FH, Dissertantenraum E104
Infinitary Classical Logic: Recursive Equations and Interactive Semantics

ABSTRACT. In this paper, we present an interactive semantics for derivations in an infinitary extension of classical logic. The formulas of our language are possibly infinitary trees labeled by propositional variables and logical connectives. We show that in our setting every recursive formula equation has a unique solution. As for derivations, we use an infinitary variant of Tait-calculus to derive sequents.

The interactive semantics for derivations that we introduce in this article is presented as a debate (interaction tree) between a test T (derivation candidate, Proponent) and an environment ¬S (negation of a sequent, Opponent). We show a completeness theorem for derivations, that we call interactive completeness theorem: the interaction between T (test) and ¬S (environment) does not produce errors (i.e., Proponent wins) just in case T comes from a syntactical derivation of S.

Separable Sequent Calculus for First-order Classical Logic (Work in Progress)

ABSTRACT. This paper presents Separable Sequent Calculus as an extension of Gentzen’s first-order classical sequent calculus with Herbrand-style structural rules (subformula contraction and weakening). Every proof can be transformed into separated form, in which all logical rules precede all structural rules, a result in the spirit of Gentzen’s midsequent theorem or sharpened Hauptsatz (where all propositional rules precede all quantifier rules), and of Herbrand’s theorem.

Every separated proof has a direct graph-theoretic semantics as a *combinatorial proof*. Since a standard Gentzen proof is a special case of a separable proof, every Gentzen proof has a combinatorial proof semantics via its separated form. The semantics identifies sequent proofs which differ by rule transpositions (such as pushing a contraction below a weakening). Since combinatorial proofs can be verified in polynomial time (conjectured linear time), they constitute a syntax in their own right, arguably a canonical abstraction of sequent calculus modulo inessential rule transposition.

Cut-Elimination in Schematic Proofs and Herbrand Sequents
SPEAKER: unknown

ABSTRACT. In a recent paper [2], a procedure was developed extending the first-order CERES method [1] so that it can handle cut-elimination in a schematic first-order calculus. The goal of this work was to circumvent the problems reductive cut-elimination methods face when the LK calculus is extended by an induction rule. The schematic calculus can be considered a replacement for certain types of induction. In this work, we used the schematic CERES method to analyse a proof formalized in a schematic sequent calculus. The statement being proved is a simple mathematical statement about total functions with a finite range. The goal of proof analysis using the first-order CERES method [1] has been to produce an ACNF (Atomic Cut Normal Form) as the final output of cut-elimination. However, due to the complexity of the schematic method, the value and usefulness of an ACNF quickly vanishes; it is not easily parsable by humans. The Herbrand sequent corresponding to an ACNF turned out to be a valuable, compact and informative structure, which may be considered the essence of a cut-free proof in first-order logic [3]. We provide a method for extracting a schematic Herbrand sequent from the formalized proof and hint at how, in future work, we can generalize the procedure, via a suitable schematic language and calculus, to handle a class of proofs rather than just a particular instance.

Stratified Nested Linear Logic

ABSTRACT. Implicit computational complexity (ICC) is the field aiming to characterize complexity classes by syntactic restrictions on models of computation. For example, Light Linear Logic (LLL) is a linear logic subsystem characterizing Ptime. Finding more expressive characterizations of Ptime is a big challenge in ICC. This led, for instance, to L^4 and MS, which are orthogonal extensions of LLL.

Here, we define a linear logic subsystem: Stratified Nested Linear Logic (SNLL) which extends LLL, L^4 and MS. This system is not just the union of those systems as it can type lambda-terms which are typed by none of the above systems.

16:00-16:30 Coffee Break
16:30-18:00 Session 34U
Location: FH, Dissertantenraum E104
A sheaf model of the algebraic closure
SPEAKER: unknown

ABSTRACT. In constructive algebra one cannot in general decide the irreducibility of a polynomial over a field K. This poses problems for showing the existence of the algebraic closure of K. We give a possible constructive interpretation of the existence of the algebraic closure of a field in characteristic 0 by building, in a constructive metatheory, a suitable site model in which such an algebraic closure exists. One can then extract computational content from this model. We give examples of computation based on this model.

A "Game Semantical" Intuitionistic Realizability Validating Markov's Principle
SPEAKER: unknown

ABSTRACT. We propose a very simple modification of Kreisel's modified realizability in order to computationally realize Markov's Principle in the context of Heyting Arithmetic in all finite types. Intuitively, realizers correspond to arbitrary strategies in Hintikka-Tarski games, while in Kreisel's realizability they can only represent winning strategies. Our definition, however, does not employ game semantical concepts directly and remains in the style of functional interpretations. As term calculus, we employ a purely functional language: Gödel's System T enriched with some syntactic sugar. Our work differs from the Dialectica interpretation in two ways: first, the realizability notion is in the style of Kreisel's and thus less technically involved; second, the term assignment to proofs follows the standard Curry-Howard correspondence for Heyting Arithmetic and is thus considerably simpler.
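For orientation: the textbook computational content of Markov's Principle (¬¬∃n P(n) → ∃n P(n), for decidable P) is unbounded search, with the double-negated premise guaranteeing termination. The sketch below shows only this classical reading, not the authors' game-semantical realizer:

```python
def mp_search(p):
    """Unbounded search for a witness of a decidable predicate p.

    Terminates whenever a witness exists -- which is exactly what the
    (realized) premise of Markov's Principle promises.
    """
    n = 0
    while not p(n):
        n += 1
    return n
```

Note that unbounded search is not definable in System T, which is total; this is precisely why realizing Markov's Principle on top of a T-like calculus requires a non-trivial construction such as the one proposed in the talk.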

Note: talk extracted from the paper to appear in the Post-Proceedings of TYPES 2013

Relative Computability and Uniform Continuity of Non-Extensional (aka Multivalued) 'Functions'
SPEAKER: Arno Pauly

ABSTRACT. In this short submission we report on (and advertise :-) our recent work that appeared in Journal of Logic & Analysis vol.5:7 (2013) 1–39:

A computable real function is necessarily continuous; and this remains true for computations relative to any oracle. Conversely, by the Weierstrass Approximation Theorem, every continuous f:[0;1]->R is computable relative to some oracle. In their search for a similar topological characterization of relatively computable MULTIvalued functions f:[0;1]=>R (also known as multi-functions or relations), Brattka and Hertling (1994) have considered two notions: weak continuity (which is weaker than relative computability) and strong continuity (which is stronger than relative computability). Observing that UNIFORM continuity plays a crucial role in the Weierstrass Theorem, we propose and compare several notions of uniform continuity for relations. Here, due to the additional quantification over values y in f(x), new ways arise of (linearly) ordering quantifiers -- yet none turns out as satisfactory. We are thus led to a concept of uniform continuity based on the Henkin quantifier; and prove it necessary for relative computability of compact real relations. In fact iterating this condition yields a strict hierarchy of notions each necessary -- and the omega-th level also sufficient -- for relative computability. A refined, quantitative analysis exhibits a similar topological characterization of relative polynomial-time computability.

Negative Translations and Heyting Algebra Expansions
SPEAKER: Tadeusz Litak

ABSTRACT. I discuss the behaviour of generalizations of the standard negative translations - Kolmogorov, Kuroda, Goedel-Gentzen, Glivenko - in extensions of intuitionistic logic with additional propositional connectives and additional axioms. The main focus is on extensions with a single, unary, normal "box-like" modality. The question, in other words, is: for which axioms does the negative translation faithfully embed the resulting classical modal logic into its intuitionistic counterpart? As it turns out, even the Kolmogorov translation can go wrong with rather natural modal axioms. Nevertheless, there are several sufficient syntactic and semantic criteria which ensure that most of these translations play well with most standard normal modal logics.
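To fix notation: on the propositional fragment, the Goedel-Gentzen translation double-negates atoms, dualizes disjunction, and leaves the other connectives alone. A minimal sketch follows (formulas as tagged tuples; the question raised in the talk is which clause to add for a box modality so that the embedding stays faithful):

```python
BOT = ("bot",)

def atom(p):     return ("atom", p)
def conj(a, b):  return ("and", a, b)
def disj(a, b):  return ("or", a, b)
def imp(a, b):   return ("imp", a, b)
def neg(a):      return imp(a, BOT)   # intuitionistic negation: a -> bottom

def gg(f):
    """Goedel-Gentzen negative translation, propositional fragment."""
    tag = f[0]
    if tag == "bot":
        return f
    if tag == "atom":
        return neg(neg(f))                        # double-negate atoms
    if tag == "and":
        return conj(gg(f[1]), gg(f[2]))
    if tag == "or":                               # disjunction via de Morgan
        return neg(conj(neg(gg(f[1])), neg(gg(f[2]))))
    return imp(gg(f[1]), gg(f[2]))                # implication untouched
```

Faithfulness here means: f is provable classically iff gg(f) is provable intuitionistically. The talk examines when a modal analogue of this property survives added axioms.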