When Coppo and Dezani designed intersection types in the seventies of the last century, their main motivation was to extend the typability power of simple types by adding an intersection connective enjoying associativity, commutativity and idempotency, thus denoting set formation. In fact, the simple type system can be seen as the restriction of the intersection type system in which all sets are singletons. Quite early, intersection types turned out to be useful for characterizing qualitative properties of the λ-calculus, like solvability and strong normalization, and for describing models of the λ-calculus in various settings. A variant of intersection types, in which the intersection is no longer idempotent, has recently been used to explore quantitative properties of programming languages, like the length of the normalisation procedure. It is natural to ask whether there is a quantitative version of the simple type system, or more precisely a decidable restriction of the non-idempotent intersection system with the same typability power as simple types. Because of the lack of idempotency, the intersection now corresponds to multiset formation, so (extending the previous reasoning) the natural answer is to restrict multiset formation to copies of the same type. But this answer turns out to be wrong: the system so obtained is decidable, but it has less typability power than simple types. We prove that the desired system is obtained by restricting multiset formation to equivalent types, where the equivalence is an extension of identity, modulo the cardinality of multisets.
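Purely as orientation, here is a notational sketch of the standard grammars involved (our own summary, not the definitions of the paper):
\[
\begin{array}{lrcl}
\text{simple types:} & A, B & ::= & a \mid A \to B \\
\text{multisets:} & M & ::= & [A_1, \ldots, A_n] \quad (n \ge 0) \\
\text{non-idempotent intersection types:} & A, B & ::= & a \mid M \to A
\end{array}
\]
Under the naive restriction, only multisets of the shape $[A, \ldots, A]$ are allowed; the system described above instead admits $[A_1, \ldots, A_n]$ whenever the $A_i$ are equivalent in the stated sense (identity up to the cardinality of multisets).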
Self-contained rules for classical and intuitionistic quantifiers
ABSTRACT. We introduce rules for the well-known quantifiers "forall" and "exists" and the lesser-known quantifiers "no" (“there is no x for which φ(x) holds”) and "cex" (“there is some x for which φ(x) doesn’t hold”) that are self-contained. By self-contained we mean that the rules are just about the quantifiers themselves and do not need other quantifiers or connectives to be expressed. As a by-product we obtain derivations of well-known classical tautologies involving only forall and \/ that satisfy
the subformula property (and thus do not rely on negation and the law of excluded middle), e.g. for forall x. (P x \/ C) |- (forall x.P x) \/ C.
Our derivation rules for the quantifiers forall, exists, no, cex are derived from the truth table natural deduction approach (the method of deriving natural deduction rules for a connective c from the truth table t_c of c), which we have introduced before. The rules come in two flavors:
(1) a complete natural deduction calculus for classical predicate logic; (2) a complete natural deduction calculus for constructive logic. In both cases one can choose which of the quantifiers and propositional connectives one wants to have. The rules are self-contained, so adding a specific quantifier does not require adding other connectives first.
The main results are the following.
1. Self-contained classical deduction rules for forall, exists, no, cex,
2. A Tarskian-style semantics for forall, exists, no, cex for which the classical rules are sound,
3. Classical derivations, e.g. of |- exists x.(exists y.P y) -> P x and of
forall x. (P x \/ C) |- (forall x.P x) \/ C that satisfy the subformula property,
4. Self-contained intuitionistic deduction rules for forall, exists, no, cex,
5. A Kripke semantics for forall, exists, no, cex for which the intuitionistic rules are sound,
6. Proofs (intuitionistic derivations) showing that ~exists x.φ and no x.φ and forall x.~φ are equivalent, and that exists x.~φ |- cex x.φ and cex x.φ |- ~forall x.φ, but not the other way around. The formula cex x.φ expresses that “there is a counter-example to φ(x)”, and it lies intuitionistically in between exists x.~φ and ~forall x.φ.
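As a point of reference, the standard intuitionistic facts behind item 6 can be checked in a proof assistant. The following is a minimal Lean sketch of our own (not the self-contained calculus above): the new quantifiers no and cex are not expressible here, and only the usual quantifiers are used, with no classical axioms.

    -- Illustrative sketch only; `no` and `cex` are not expressible here.
    variable {α : Type} (φ : α → Prop)

    -- ¬∃x.φ and ∀x.¬φ are intuitionistically equivalent.
    example : (¬ ∃ x, φ x) ↔ ∀ x, ¬ φ x :=
      ⟨fun h x hx => h ⟨x, hx⟩, fun h ⟨x, hx⟩ => h x hx⟩

    -- ∃x.¬φ entails ¬∀x.φ; the converse is not intuitionistically provable.
    example : (∃ x, ¬ φ x) → ¬ ∀ x, φ x :=
      fun ⟨x, hx⟩ h => hx (h x)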
Terms as Types: Calculations in the lambda-Calculus
ABSTRACT.
The calculus of Natural Calculation (NC), proposed by the second author, is an extension of Gentzen’s Natural Deduction for first-order logic with equality, including proper term rules that permit a natural representation of calculations inside proofs. In NC, first-order terms are first-class members of the calculus, on a par with formulae: they can be assumed and discharged; the elimination rule for equality (given in two polarities, to capture the symmetry of equality) permits calculation steps in the calculus that transform term premises into term conclusions; and the evaluation of calculations is the corresponding introduction rule for the equality symbol.
In this talk we develop the lambda-calculus corresponding to NC. There are proof terms P and calculation terms C, the latter typed with the final term of the calculation corresponding to C. The beta rule for equality is also given in two polarities, one of them requiring the "dualization" of the calculation term that constitutes the abstraction involved. The main result so far is a "polarized" translation into the simply-typed lambda-calculus, which has to anticipate at translation time the needed dualizations. The translation preserves typability and non-empty reduction sequences, and so the new lambda-calculus enjoys strong normalization.
Monadic Realizability for Intuitionistic Higher-Order Logic
ABSTRACT. The standard construction for realizability semantics of intuitionistic higher-order logic is based on partial combinatory algebras as an abstract computation model with a single computational effect, namely, non-termination.
Many computational effects can be modelled using monads, where programs are interpreted as morphisms in the corresponding Kleisli category.
To account for a more general notion of computational effects, we here construct effectful realizability models via evidenced frames, where the underlying computational model is defined in terms of an arbitrary monad.
Concretely, we generalize partial combinatory algebras to combinatory algebras over a monad and use monotonic post-modules to relate predicates to computations.
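To make the shift concrete, here is a small hedged Haskell sketch of our own (not the paper's definitions): application in a partial combinatory algebra has type a -> a -> Maybe a, and replacing Maybe by an arbitrary monad m turns application into a Kleisli arrow, so programs compose in the Kleisli category of m.

    {-# LANGUAGE MultiParamTypeClasses #-}
    {-# LANGUAGE FlexibleInstances #-}

    -- Combinatory algebra over a monad m: application is effectful.
    class Monad m => CombAlg m a where
      (.@.) :: a -> a -> m a

    -- Iterated application sequences the effect of every step.
    app2 :: CombAlg m a => a -> a -> a -> m a
    app2 f x y = do
      g <- f .@. x
      g .@. y

    -- The classical case: a partial combinatory algebra is recovered with the
    -- Maybe monad, whose single effect is undefinedness/non-termination.
    newtype Prog = Prog (Prog -> Maybe Prog)

    instance CombAlg Maybe Prog where
      Prog f .@. x = f x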
ABSTRACT. In which the author ponders the possibility of designing a representation of scopes and variables such that common operations on syntax do not require the scope at run-time.
ABSTRACT. We first define a categorical framework capturing a computationally relevant notion of partiality. Then we show that generalized algebraic data types (GADTs) cannot be interpreted as functors in any instance of this framework.
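For readers who have not met them, here is a standard GADT in Haskell (a generic example, not taken from the paper): the constructors constrain the type index, which is exactly what separates GADTs from ordinary algebraic data types and complicates reading them as functors.

    {-# LANGUAGE GADTs #-}

    -- A well-typed expression language: each constructor fixes the index.
    data Expr a where
      IntLit  :: Int  -> Expr Int
      BoolLit :: Bool -> Expr Bool
      If      :: Expr Bool -> Expr a -> Expr a -> Expr a

    -- The index is not a uniform "container" parameter: evaluation inspects it.
    eval :: Expr a -> a
    eval (IntLit n)  = n
    eval (BoolLit b) = b
    eval (If c t e)  = if eval c then eval t else eval e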
ABSTRACT. We extend intersection types to a computational $\lambda$-calculus with algebraic operations \emph{à la} Plotkin and Power. We achieve this by considering monadic intersections---whereby computational effects appear not only in the operational semantics,
but also in the \emph{type system}. Since in the effectful setting termination is no longer the only property of interest, we want to analyze the interactive behavior of typed programs with the environment. Indeed, our type system is able to characterize the natural notion of observation, both in the finite and in the infinitary setting, and for a wide class of effects, such as output, cost, pure and probabilistic nondeterminism, and combinations thereof. The main technical tool is a novel combination of syntactic techniques with abstract relational reasoning,
which allows us to lift all the required notions, e.g. of typability and logical relation, to the monadic setting.
The Rewster: The Coq Proof Assistant with Rewrite Rules
ABSTRACT. We present a work-in-progress extension of Coq which supports user-defined rewrite rules. While we mostly follow in the footsteps of the Agda implementation, we also have to face new issues due to the differences in the implementation and meta-theory of Coq and Agda, the most prominent one being the different treatment of universes, as Coq supports cumulativity but no first-class universe levels. We will take advantage of this talk to present our ideas on how to solve the different issues that arise when adding user-defined rewrite rules to a proof assistant by integrating rewrite rules in MetaCoq, building on previous work.
ABSTRACT. Subtype universes were initially introduced by Maclean and Luo as a way of formulating the notion of a 'type of subtypes' for a logical type theory equipped with coercive subtyping. This extended type theory has several nice metatheoretical properties such as strong normalisation, but this implementation excluded certain kinds of subtyping relations from being used, and the formulation was wrapped up in the complexities of the underlying type theory's universe hierarchy. We consider a simpler yet more expressive reformulation of subtype universes with the ability to extract coercions and discuss how this lifts the limits on what subtype relations can be used, and how the choice of subtype relations impacts the metatheory.
ABSTRACT. Nominal techniques provide a mathematically principled approach to dealing with names and variable binding in programming languages (amongst other applications). However, integrating these ideas in a practical and widespread toolchain has been slow, and we perceive a chicken-and-egg problem: there are no users for nominal techniques, because nobody has implemented them, and nobody implements them because there are no users. This is a pity, but it leaves a positive opportunity to set up a virtuous circle of broader understanding, adoption, and application of this beautiful technology.
This paper explores an attempt to make nominal techniques accessible as a library in the Agda proof assistant and programming language. The library can be viewed as a port of the first author’s Haskell nom package, although that would do it an injustice, as its purpose is two-fold:
(1) provide a convenient library to use nominal techniques in Your Own Agda Formalisation
(2) study the formal meta-theory of nominal techniques in a rigorous and constructive way
A solution to Goal (1) must be ergonomic, meaning that a technical victory of implementing nominal ideas is not enough; we further require a moral victory that the overhead be acceptable for practical (and preferably larger-scale) systems. Apart from this being a literate Agda file, our results have been mechanised and are publicly accessible: https://omelkonian.github.io/nominal-agda/.
ABSTRACT. Distributed ledgers nowadays manage substantial monetary funds in the form of cryptocurrencies such as Bitcoin, Ethereum, and Cardano. For such ledgers to be safe, operations that add new entries must be cryptographically sound---but it is less clear how to reason effectively about such ever-growing linear data structures.
This paper views distributed ledgers as computer programs that, when executed, transfer funds between various parties. As a result, familiar program logics, such as Hoare logic and separation logic, can be defined in this novel setting. Borrowing ideas from concurrent separation logic, we obtain modular reasoning principles over arbitrary fragments of any ledger.
Our results have been mechanised in the Agda proof assistant and are publicly available: https://omelkonian.github.io/hoare-ledgers.
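To convey the reading of a ledger as a program, here is a hedged Haskell illustration of our own (not the Agda mechanisation linked above): a ledger is a list of transfers, its denotation is a function on balance maps, and a Hoare triple is a statement about that function.

    import qualified Data.Map.Strict as M

    type Party    = String
    type Balances = M.Map Party Integer

    data Transfer = Transfer { from :: Party, to :: Party, amount :: Integer }

    -- A ledger is a (growing) list of transfers.
    type Ledger = [Transfer]

    -- Denotation: executing a ledger transforms a map of balances.
    exec :: Ledger -> Balances -> Balances
    exec = flip (foldl step)
      where
        step s (Transfer f t a) =
          M.insertWith (+) t a (M.insertWith (+) f (negate a) s)

    -- The triple {p} l {q} checked at one initial state s; validity means
    -- this holds for every s.
    hoare :: (Balances -> Bool) -> Ledger -> (Balances -> Bool) -> Balances -> Bool
    hoare p l q s = not (p s) || q (exec l s)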
ABSTRACT. Oracles are crucial components that bring external data to smart contracts deployed
on blockchains. With the recent surge in popularity of decentralized finance (DeFi) applications, it is critical to provide assurances about the oracle implementations, as these applications deal with high-value transactions and a small price discrepancy can lead to huge losses. Although there are many oracle implementations, there have not been many efforts to formally verify their behavior. We present a simple oracle implementation in Solidity and its formal model using the Coq interactive theorem prover. We also prove interesting trace-level properties that give us formal guarantees about the oracle’s behavior at a high level. Our work can be a stepping stone for future oracle implementations and provide developers with a framework for formally verifying their implementations.
ABSTRACT. This paper defines a basic model of smart contracts in Agda. In previous work [1], we verified smart contracts in Bitcoin. The aim of this paper is a first step towards transferring this work to the Solidity-style [8] smart contracts of Ethereum, namely, to develop a model. This model is much more complex than the one used for Bitcoin because contracts in Ethereum are object-oriented. We build a simple model that supports basic execution and the calling of other contracts and functions, and that refers to addresses and messages.
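As a rough, hedged illustration of the kind of model meant (a Haskell sketch of our own; the paper's model is in Agda and differs in detail): contracts live at addresses, receive messages, and may call other contracts through the environment.

    import qualified Data.Map.Strict as M

    type Address = Int

    data Msg = Msg { sender :: Address, payload :: [Int] }

    -- A contract exposes named functions that take a message and may call
    -- other contracts; the environment of all contracts is passed along.
    newtype Contract = Contract
      { call :: Env -> String -> Msg -> Maybe Int }

    type Env = M.Map Address Contract

    -- Execute a call to function f of the contract stored at address a.
    exec :: Env -> Address -> String -> Msg -> Maybe Int
    exec env a f m = do
      c <- M.lookup a env
      call c env f m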
Rank-polymorphic arrays within dependently-typed languages
ABSTRACT. Rank polymorphism is the ability of a function to operate on arrays of arbitrary rank. The advantages of rank polymorphism are twofold: very generic array combinators and the ability to specify advanced parallel algorithms such as scan or blocked matrix multiplication in a very natural combinatorial style. In this work we present an embedding of rank-polymorphic arrays within a dependently-typed language. Our embedding offers the generality of the specifications found in array languages. At the same time, we guarantee safe indexing and offer a way to reason about concurrency patterns within the given algorithm. The notion of array reshaping makes it possible to derive multiple parallel versions of the algorithm from a single specification. The overall structure of the proposed array framework is surprisingly similar to that of categories with families, which are often used to encode type theory. Shapes are contexts, reshapes are substitutions, and arrays are well-scoped terms.
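To illustrate plain rank polymorphism itself, here is a hedged Haskell sketch of our own (unlike the embedding described above it is not dependently typed and does not track shapes, so it offers no safe-indexing guarantees): an array is a nesting of dimensions, and combinators work uniformly at every rank.

    -- Rank = nesting depth of Dim around scalar entries.
    data Array a = Scalar a | Dim [Array a]

    -- Apply a scalar function at every position, at any rank.
    amap :: (a -> b) -> Array a -> Array b
    amap f (Scalar x) = Scalar (f x)
    amap f (Dim xs)   = Dim (map (amap f) xs)

    -- Rank-polymorphic zip-with, assuming the two arrays share a shape.
    azip :: (a -> b -> c) -> Array a -> Array b -> Array c
    azip f (Scalar x) (Scalar y) = Scalar (f x y)
    azip f (Dim xs)   (Dim ys)   = Dim (zipWith (azip f) xs ys)
    azip _ _          _          = error "shape mismatch"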