TYPES 2024
PROGRAM FOR MONDAY, JUNE 10TH


09:00-10:00 Session 2: Invited Talk
Location: Aud 0
09:00
Cocon: A Type-Theoretic Framework for Meta-Programming

ABSTRACT. Meta-programming is the art of writing programs that produce or manipulate other programs. This allows programmers to automate error-prone or repetitive tasks and to exploit domain-specific knowledge to customize the generated code. Hence, meta-programming is widely used in a range of technologies: from cryptographic message authentication in secure network protocols to supporting reflection in proof environments such as Lean.

Unfortunately, writing safe meta-programs remains very challenging and sometimes frustrating, as traditionally errors in the generated code are only detected when running it, but not at the time when code is generated. To make it easier to write and maintain meta-programs, tools that allow us to detect errors during code generation – instead of when running the generated code – are essential.

This talk revisits Cocon, a framework for certified meta-programming. Cocon is a Martin-Löf dependent type theory for defining logics and proofs that allows us to represent domain-specific languages (DSLs) within the logical framework LF and, in addition, to write recursive programs and proofs about those DSLs [3] using pattern matching. It is a two-level type theory where Martin-Löf type theory sits on top of the logical framework LF and supports a recursor over (contextual) LF objects. As a consequence, we can embed STLC, System F, etc. into LF and then write programs about those encodings in Cocon itself. This means Cocon can serve as a target for compiling meta-programming systems, from meta-programming with STLC to meta-programming with System F. Moreover, Cocon supports writing an evaluator for each of these sub-languages, which allows us to reflect the encoded sub-languages back and evaluate their encodings using Cocon's evaluation strategy.
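
As a rough illustration of the kind of setup described above, here is a minimal plain-Agda sketch (not Cocon's LF-based contextual encoding; all names are illustrative) that encodes STLC and then writes a recursive program, an evaluator, over the encoding:

open import Data.Unit    using (⊤; tt)
open import Data.Product using (_×_; _,_)

data Ty : Set where
  base : Ty
  _⇒_  : Ty → Ty → Ty

data Ctx : Set where
  ∅   : Ctx
  _▸_ : Ctx → Ty → Ctx

data _∋_ : Ctx → Ty → Set where
  here  : ∀ {Γ A}   → (Γ ▸ A) ∋ A
  there : ∀ {Γ A B} → Γ ∋ A → (Γ ▸ B) ∋ A

data _⊢_ : Ctx → Ty → Set where
  var : ∀ {Γ A}   → Γ ∋ A → Γ ⊢ A
  lam : ∀ {Γ A B} → (Γ ▸ A) ⊢ B → Γ ⊢ (A ⇒ B)
  app : ∀ {Γ A B} → Γ ⊢ (A ⇒ B) → Γ ⊢ A → Γ ⊢ B

-- Interpret object-language types and contexts as Agda types.
⟦_⟧ty : Ty → Set
⟦ base  ⟧ty = ⊤
⟦ A ⇒ B ⟧ty = ⟦ A ⟧ty → ⟦ B ⟧ty

⟦_⟧ctx : Ctx → Set
⟦ ∅     ⟧ctx = ⊤
⟦ Γ ▸ A ⟧ctx = ⟦ Γ ⟧ctx × ⟦ A ⟧ty

lookup : ∀ {Γ A} → Γ ∋ A → ⟦ Γ ⟧ctx → ⟦ A ⟧ty
lookup here      (_ , a) = a
lookup (there x) (γ , _) = lookup x γ

-- A recursive program over the encoding: the evaluator.
eval : ∀ {Γ A} → Γ ⊢ A → ⟦ Γ ⟧ctx → ⟦ A ⟧ty
eval (var x)   γ = lookup x γ
eval (lam t)   γ = λ a → eval t (γ , a)
eval (app s t) γ = eval s γ (eval t γ)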

I will conclude by highlighting more recent research directions and challenges (see [2] and [1]) that build on the core ideas of Cocon and aim to support meta-programming and intensional code analysis directly in System F and Martin-Löf type theory.

 

References

[1] Jason Z. S. Hu and Brigitte Pientka. Layered modal type theory - where meta-programming meets intensional analysis. In ESOP (1), volume 14576 of Lecture Notes in Computer Science, pages 52–82. Springer, 2024.

[2] Junyoung Jang, Samuel Gélineau, Stefan Monnier, and Brigitte Pientka. Moebius: Metaprogramming using contextual types – the stage where System F can pattern match on itself. Proc. ACM Program. Lang. (PACMPL), POPL, 2022.

[3] Brigitte Pientka, Andreas Abel, Francisco Ferreira, David Thibodeau, and Rebecca Zucchini. A type theory for defining logics and proofs. In 34th IEEE/ACM Symposium on Logic in Computer Science (LICS'19), pages 1–13. IEEE Computer Society, 2019.

10:00-10:40 Session 3: Polarized Logic
Location: Aud 0
10:00
Faithful interpretations of LJT and LJQ into polarized logic
PRESENTER: Luís Pinto

ABSTRACT. LJT and LJQ are well-known focused sequent calculi with a long history in proof theory and a connection with call-by-name and call-by-value computation, respectively. We are revisiting and problematizing a faithful interpretation of LJT into the focused sequent calculus LJP for polarized logic. Faithfulness allows for the inheritance of a solution space representation developed for LJP and the reuse of known results concerning decision problems related to inhabitation in LJP.

Moreover, we describe work in progress on a faithful interpretation of LJQ into LJP. This result was hinted at before in the focusing literature, but we aim at a full treatment (the proofs for the fragment with implication and disjunction are settled), technically relying on the use of proof terms. The use of proof terms brings the perspective of inhabitation of simple types by the normal forms of a call-by-value language.

10:20
Yet another formal theory of probabilities (with an application to random sampling)

ABSTRACT. There are already several formalizations of probability theory in the Coq proof assistant with applications to mathematics, information theory, and programming languages. They have been developed independently, do not cover the same ground, and a substantial effort is required to make them interoperate. In this presentation, we report on an ongoing effort in Coq to port and generalize a library about finite probabilities to a more generic formalization of real analysis called MathComp-Analysis. This gives us an opportunity to generalize results about convexity and probability and to enrich the library of probability inequalities. We explain our formalization process and apply the resulting library to an original formalization of a random sampling theorem.

10:40-11:10 Coffee Break
11:10-12:30 Session 4: Constructive Mathematics
Location: Aud 0
11:10
A zoo of continuity properties in constructive type theory

ABSTRACT. We analyse continuity properties formalised in constructive dependent type theory, from the perspective of constructive reverse mathematics. All our results are mechanised in the Coq proof assistant.

11:30
The Blurred Drinker Paradox and Blurred Choice Axioms for the Downward Löwenheim-Skolem Theorem

ABSTRACT. In the setting of constructive reverse mathematics, we analyse the downward Löwenheim-Skolem (DLS) theorem of first-order logic, stating that every infinite model has a countable elementary submodel. Refining the well-known equivalence of the DLS theorem to the axiom of dependent choice (DC) over the classically omnipresent axiom of countable choice (CC) and law of excluded middle (LEM), our approach allows for several finer logical decompositions: over CC, the DLS theorem is equivalent to the conjunction of DC with a newly identified principle weaker than LEM that we call the blurred drinker paradox (BDP); without CC, the DLS theorem is equivalent to the conjunction of BDP with similarly blurred weakenings of DC and CC. Orthogonal to their connection with the DLS theorem, we also study BDP and the blurred choice axioms in their own right, for instance by showing that BDP is LEM without some contribution of Markov's principle and that blurred DC is DC without some contribution of CC. All results are stated in the Calculus of Inductive Constructions and an accompanying Coq mechanisation is available on GitHub.
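
Schematically (notation not from the talk, using "blurred" for the weakened principles it introduces), the stated decompositions read:

  over CC:      DLS ↔ DC ∧ BDP
  without CC:   DLS ↔ BDP ∧ blurred DC ∧ blurred CC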

11:50
Limited Principles of Omniscience in Constructive Type Theory

ABSTRACT. The Limited Principle of Omniscience (LPO), an instance of the Law of Excluded Middle (LEM) stating that Σ⁰₁ propositions P (i.e. existential quantifications over a decidable predicate on ℕ) are classical (i.e. P ∨ ¬P holds), is often enough to prove theorems of classical mathematics. LPO implies Markov's Principle (MP), which states that Σ⁰₁ propositions are stable under double negation. Several variants of MP, varying in the definition of decidability, have been introduced and used in the literature, and we have shown in previous work that two of these variants can be separated. Here we further show how to separate three variants (stated over (1) a decidable predicate, (2) a Boolean-valued function, and (3) a primitive recursive Boolean-valued function), and we extend these results to LPO. Furthermore, we give these separations (formalized in Agda) for the first time for Martin-Löf Type Theory (MLTT), which is at the heart of many dependent type theories.
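
For illustration, the Boolean-valued-function variants (2) are commonly stated along the following lines in Agda; this is only a sketch, and the talk's precise definitions, as well as the other two variants, may differ:

open import Data.Nat     using (ℕ)
open import Data.Bool    using (Bool; true)
open import Data.Product using (∃)
open import Data.Sum     using (_⊎_)
open import Relation.Nullary using (¬_)
open import Relation.Binary.PropositionalEquality using (_≡_)

-- LPO over Boolean-valued functions: every f : ℕ → Bool either hits
-- true somewhere or never does.
LPO : Set
LPO = (f : ℕ → Bool) → (∃ λ n → f n ≡ true) ⊎ (¬ (∃ λ n → f n ≡ true))

-- MP over Boolean-valued functions: such existence statements are
-- stable under double negation.
MP : Set
MP = (f : ℕ → Bool) → ¬ (¬ (∃ λ n → f n ≡ true)) → ∃ λ n → f n ≡ true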

12:10
Post’s Problem and the Priority Method in CIC

ABSTRACT. We describe our formalisation of a solution to Post's Problem using the priority method in synthetic computability theory. Compared to the usual, analytic approach employing explicit models of computation, a synthetic approach axiomatically considers all functions ℕ → ℕ to be computable. We work in the Calculus of Inductive Constructions and mechanise all proofs in the Coq proof assistant.

12:30-14:00 Lunch Break
14:00-15:20 Session 5: Proof Assistant Implementation
Location: Aud 0
14:00
Type-Based Termination Checking in Agda

ABSTRACT. We describe a type-based termination checker for the dependently-typed language Agda. The checker is based on the theory of sized types, i.e., types indexed by an abstract well-ordered type of sizes. Its semantic foundation is a variant of a strongly normalizing higher-order polymorphic lambda calculus with pattern and copattern matching. The checker is implemented for Agda, where it is used to certify termination of recursive functions and productivity of corecursive functions.
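
As background, the flavour of sized types can be illustrated with Agda's existing Size builtin; the following is the standard subtraction/division example from the sized-types literature, not the new checker itself:

{-# OPTIONS --sized-types #-}
open import Agda.Builtin.Size

-- Natural numbers indexed by an upper bound on their height.
data Nat : Size → Set where
  zero : ∀ {i} → Nat (↑ i)
  suc  : ∀ {i} → Nat i → Nat (↑ i)

-- Subtraction does not increase the size of its first argument ...
sub : ∀ {i} → Nat i → Nat ∞ → Nat i
sub zero    n       = zero
sub (suc m) zero    = suc m
sub (suc m) (suc n) = sub m n

-- ... which is exactly what lets division pass the termination check:
-- the recursive call is on `sub m n`, whose size is below that of `suc m`.
div : ∀ {i} → Nat i → Nat ∞ → Nat i
div zero    n = zero
div (suc m) n = suc (div (sub m n) n)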

14:20
Size-preserving dependent elimination

ABSTRACT. In 2013, the guard condition used in Coq for checking well-foundedness of fixpoints over inductive types was proven incompatible with propositional extensionality. To preserve compatibility with propositional extensionality, the guard algorithm was restricted, but this also excluded some interesting well-founded fixpoints, such as those obtained by compiling dependent pattern-matching using "small inversion". We propose a less restrictive guard condition that suffices to support small-inversion-based compilation of dependent pattern-matching while still being conjectured to be compatible with propositional extensionality. We also conjecture that this new kind of restriction supports a form of reduction rules for proof-irrelevant propositional equality that does break strong normalisation in the presence of an impredicative universe.

14:40
How much do System T recursors lift to dependent types?

ABSTRACT. We investigate how to give a foundational role to as-variables in pattern-matching.
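
For reference, an as-variable (written with @ in Agda) names the whole matched value alongside its sub-patterns; a tiny illustrative example, not taken from the talk:

open import Data.Nat using (ℕ; zero; suc; _+_)

-- n is an as-variable: it stands for the whole matched value suc m,
-- while m is bound to its sub-part at the same time.
shrink : ℕ → ℕ
shrink zero      = zero
shrink n@(suc m) = m + n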

15:00
A generic translation from case trees to eliminators

ABSTRACT. Dependently-typed languages allow one to guarantee the correctness of a program by providing formal proofs. The type checkers of such languages elaborate the user-friendly high-level surface language into a small and fully explicit core language. A lot of trust is put into this elaboration process, even though it is rarely verified. One such elaboration translates dependent pattern-matching into the low-level construction of eliminators. This elaboration is done in two steps: first, the function defined by dependent pattern-matching is translated into a case tree, which can then be further translated into eliminators. We present a generic, correct-by-construction translation of this second step in Agda, without the use of metaprogramming or unsafe transformations, by giving a type-safe, correct-by-construction, generic definition of case trees and an evaluation function that, given an instantiation of such a case tree and an instantiation of the telescope of function arguments, evaluates the output term of the function using only eliminators.
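
A much simpler concrete instance of the kind of translation targeted here (names below are illustrative): the same function written by pattern matching, i.e. as a small case tree, and again using only the eliminator of ℕ:

open import Data.Nat using (ℕ; zero; suc)

-- Defined by pattern matching (a two-branch case tree).
double : ℕ → ℕ
double zero    = zero
double (suc n) = suc (suc (double n))

-- ℕ's eliminator (induction principle).
ℕ-elim : (P : ℕ → Set) → P zero → (∀ n → P n → P (suc n)) → ∀ n → P n
ℕ-elim P pz ps zero    = pz
ℕ-elim P pz ps (suc n) = ps n (ℕ-elim P pz ps n)

-- The same function, expressed using only the eliminator.
double′ : ℕ → ℕ
double′ = ℕ-elim (λ _ → ℕ) zero (λ _ r → suc (suc r))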

15:20-15:50 Coffee Break
15:50-16:50 Session 6: Blockchain and Smart Contracts
Location: Aud 0
15:50
Mechanizing BFT consensus protocols in Agda

ABSTRACT. Consensus protocols have been around for a long time, but there has been a surge of interest in the last decade, mostly motivated by cryptocurrency and blockchain applications, where all participants need to decide and agree on which block should be next in the chain.

Consensus protocols can be permissioned or permissionless. Nakamoto-style consensus protocols are permissionless (all participants can take part in the decision procedure), while classical protocols like BFT are permissioned (a few designated participants make the decision). With the advent of proof-of-stake blockchains, permissioned protocols can be adapted to work in a blockchain setting: a commission can be formed based on the stake of all participants. The commission makes all decisions until a new commission is designated.

BFT protocols follow a propose-and-vote pattern, where a leader proposes a block and other nodes vote for it. When a block gets enough votes, it is notarized. A notarized chain (i.e. a chain of notarized blocks) is considered finalized when certain protocol-dependent conditions hold. Once a chain is final, the protocol guarantees that all participants will agree on it.

There are several formalisations of different consensus protocols in which part or all of a protocol, together with its safety and liveness properties, is proved correct.

In this work, we propose an alternative approach, where the formalization is divided into layers. At its most abstract layer, we only model the minimum information needed to prove that a block with a certain property exists on a (finalized) chain. The motivation for this is that we want to be able to construct and verify zero-knowledge (ZK) proofs that a given chain is notarized/final. For this purpose, the details of how to get to a notarized/final chain are not important.

Of course, in order to get a final chain one will need a concrete protocol, which we formalize in the next layer; this layer corresponds to a mechanization of the paper that introduces the protocol informally. Still, one might want to explore more details than the "academic" version of the paper, e.g., to analyze and optimize performance or to include the actual networking, which should in turn be studied at an even lower layer of abstraction.
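
Purely for illustration (this is not the authors' development, and all names are hypothetical), the most abstract layer could be pictured as a record that a concrete protocol then has to instantiate:

open import Data.List using (List)
open import Data.List.Relation.Unary.Any using (Any)

record AbstractLayer (Block : Set) : Set₁ where
  field
    Finalized : List Block → Set   -- which chains count as finalized
    Good      : Block → Set        -- the block property of interest
    -- every finalized chain contains a block with the property
    witness   : ∀ (c : List Block) → Finalized c → Any Good c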

We conduct everything mechanically using the Agda proof assistant and simultaneously develop a compilation backend to Rust, so that our formal artifact can also be utilized for conformance testing against an actual blockchain in production.

16:10
Termination-checked Solidity-style smart contracts in Agda in the presence of Turing completeness

ABSTRACT. This paper is a further step in extending the verification of Bitcoin Script using weakest precondition semantics in our articles [6, 1, 5] to Solidity-style smart contracts. The first step is to develop a model, which is substantially more complex than that of Bitcoin Script because smart contracts in Solidity are object-oriented. This paper extends the simple model of Solidity-style smart contracts in Agda from our article [2] to a complex model. The main addition in the complex model is that it deals with the termination problem by adding a cost per instruction (gas cost), as implemented in Ethereum; as a result, the execution of smart contracts passes Agda's termination checker.
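
The gas idea can be sketched as follows (a toy fragment, not the paper's model): since each step consumes one unit of gas, the interpreter recurses on a structurally smaller number and therefore passes Agda's termination checker.

open import Data.Nat   using (ℕ; zero; suc)
open import Data.Maybe using (Maybe; just; nothing)

data Instr : Set where
  halt : Instr
  loop : Instr            -- an instruction that would run forever

run : ℕ → Instr → Maybe Instr
run zero    i    = nothing      -- out of gas: execution is aborted
run (suc g) halt = just halt    -- finished normally
run (suc g) loop = run g loop   -- each step costs one unit of gas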

16:30
A formal security analysis of Blockchain voting

ABSTRACT. Voting and blockchains are intimately connected. Voting is used in blockchains for consensus, governance, and decentralized organizations (DAOs). Conversely, blockchains can serve as a bulletin board for smaller elections. For these applications, the stakes are high, both financial and societal. Moreover, for blockchains, the adversarial model is complex: the adversary has complete knowledge of the system and full access to the network.

Here we focus on the use of blockchains for smaller elections, so-called boardroom voting. We consider one such protocol: the Open Vote Network (OVN), which provides private voting on a blockchain. Such private voting could also be applied to a DAO, improving on the state of the art, where voting for DAOs is often pseudonymous rather than private.

The OVN protocol has previously been implemented as a smart contract on Ethereum. It comes with an informal security argument. We propose a more complete analysis, which we formalize in the Coq proof assistant. We thus provide, for the first time, a complete analysis of a cryptographic smart contract, which is both provably functionally correct and cryptographically secure. We are extending this to a more realistic implementation.