ABSTRACT. Proof assistants like Coq, Lean, and Isabelle/HOL have shown great promise for helping people write formal proofs about both programs and mathematics. Proof automation aims to make that easier, whether through full automation or through interactive user assistance. Traditional proof automation based on symbolic AI, logic, and programming languages theory is highly predictable and understandable, but lacks flexibility and is challenging for non-experts to extend. In contrast, neural proof automation based on machine learning and natural language processing can be unpredictable and confusing, but shines in its flexibility and extensibility for non-experts. This talk will describe how these different kinds of proof automation come together to make writing formal proofs easier, and what this means for the future of formal proof.
Internal relational parametricity, without an interval
ABSTRACT. In this talk we describe a type theory with internal relational parametricity, based on previous work by Altenkirch, Kaposi, Shulman and Chamoun (POPL 2024).
ABSTRACT. It has been long-recognized in the theory of functional programming languages that the
concept of parametric polymorphism has a particular connection to the category-theoretic notion
of naturality. For instance, Wadler [Wad89] famously noted that any System F function r of
type
∀α. List(α) → List(α),
that is, a function List(α) → List(α) which is “polymorphic in α”, must automatically be a
natural transformation from the List functor to itself, i.e.
rY ◦ (map f ) = (map f ) ◦ rX
for any function f : X → Y. However, this tight connection between
parametricity and naturality breaks down for types with more complex
variance; for instance, a polymorphic function
g : ∀α. (α → α) → (α → α) is not even the right kind of thing to be a natural transformation
from the Hom functor to itself: g is only indexed by one type variable α, whereas Hom is a
difunctor, a functor taking in one covariant and one contravariant argument. The notion of a
dinatural transformation [ML78, Chapter IX] does not provide a general solution either, since
dinaturals do not compose. Instead, the analogy to naturality is left there; the approach taken
by Reynolds [Rey83], and consequently by most of the literature on parametric polymorphism,
is to state parametricity in terms of relations instead of functions. Indeed, [HRR14] goes so
far as to suggest that Reynolds’s solution—“to generalize homomorphisms from functions to
relations”—ought to be carried out across mathematics more broadly.
The present author sought to push back on this suggestion, on account
of the more cumbersome nature of relational calculi, as well as a
desire not to reinvent category theory in a relational mould (e.g. the
theory of allegories [FS90]). However, to meet this challenge,
defenders of function-based mathematics would need to formulate
parametricity in a functional,
category-theoretic way, i.e. extend the notion of naturality to mixed-variant functors so as to
complete the connection above. In the preprint Paranatural Category Theory [Neu23], I sought
to develop the category theory surrounding the most promising candidate—strong dinatural
transformations [Mul92]—towards such a possible solution. The purpose of this talk is to discuss
the current status of this theory, and the difficulties that have emerged since the first draft of
the preprint.
One area of focus will be the failure of the di-Yoneda Lemma—an analogue of the Yoneda
Lemma, for difunctors and strong dinatural transformations—as
originally stated. If the category of difunctors and strong dinatural
transformations had such a Yoneda Lemma, then it
would be possible to define exponentials in this category analogously to how they’re defined in
presheaf categories, thereby (potentially) bypassing some of the known issues with using strong
dinatural transformations as a formalism for parametricity. I
speculate on whether some restricted class of difunctors can be
identified which does have such a Yoneda Lemma, and whether
this class includes the examples important in practice.
I’ll also cover some progress towards building a strong (co)end calculus. The original preprint
included a development of this calculus, which generalizes the work of
Awodey et al. [AFS18] encoding inductive types, to cover coinductive
and existential types; it also included the Yoneda-like Lemmas due to
Uustalu [Uus10], connecting strong dinaturality to initial
algebras/terminal coalgebras. However, the original preprint left much
of the connection to existing work on the (co)end calculus unexplored
and several questions unanswered, which I hope to address in this
talk.
Strict syntax of type theory via alpha-normalisation
ABSTRACT. The main difficulty of using the well-typed quotiented syntax of type
theory in formalisations is the so-called transport hell: the
equality (app t u)[γ] = app (t[γ]) (u[γ]) does not make sense because
t[γ] is not of a function type, but a substituted type (A⇒B)[γ], and
we need another equation on types (namely (A⇒B)[γ] = (A[γ])⇒(B[γ])) to
make it well typed. Hence a transport will appear on the subterm
(t[γ]), and it makes it difficult to use this equation: we need to
manually move the transport in and out of terms. We propose a
mitigation of this problem for the syntax of type theory via a
well-known technique: defining the substitution operation _[_]
recursively, rather than including it as part of the syntax. It is
common to do
this in extrinsic syntaxes, but it is not obvious for intrinsic
syntaxes where substitution is mentioned by some of the typing and
computation rules. We propose the following methodology: (i) define
the usual intrinsic syntax with explicit substitutions; (ii) prove
alpha-normalisation for this syntax (explicit substitutions do not
appear in alpha-normal forms); (iii) define a new model using
alpha-normal forms where substitution laws (such as the one for ⇒ and
app above) are definitional. Now there is much less transport hell triggered by
the syntax: substitutions compute away, and we have a simpler
induction principle. We used the above technique to formalise simple type
theory and proved canonicity, and a version for dependent types is in
progress.
ABSTRACT. Categories with Families (CwFs) provide a natural semantics
for Type Theory, so that the intrinsic syntax of Type Theory directly
gives rise to an initial CwF, represented as a Quotient
Inductive-Inductive Type. It was noted that in a Type Theory without
Uniqueness of Identity Proofs (UIP), such as Homotopy Type Theory
(HoTT), we cannot interpret the syntax in the standard model,
i.e. Sets. Instead, we used an inductive-recursive universe, which is
not univalent, to overcome this issue. To deal with the issue more
directly, we introduce the notion of a coherent CwF, or 1-CwF, while
we call the set-truncated CwFs set CwFs or 0-CwFs. In a coherent CwF
we truncate types to be groupoids (1-types).
Type theory in type theory using single substitutions
ABSTRACT. The well-typed quotiented (intrinsic) syntax of type theory can be
defined as the initial category with families (CwF) with extra
structure corresponding to the type formers. The initial model is
implemented in type theory by a quotient inductive-inductive type. In
this talk we describe a simpler alternative to CwFs using single
substitutions and single weakenings. These are not composable, so the
syntax is not based on a category (the families are still there). In
addition to computation and uniqueness rules, the equations we have
are all naturality equations (also called substitution laws, such as
"(app t u)[γ] = app (t[γ]) (u[γ])"); there are five equations
describing the interaction of variables and single
substitutions/weakenings, and four extra equations for types. The
rules for CwF are provable from our rules by induction on the single
substitution syntax. We have more models than CwFs; the situation is
similar to the correspondence between combinatory logic and lambda
calculus. Our single substitution calculus is a simpler variant of
Voevodsky's B-systems (while his C-systems correspond to CwFs). We
formalised a type theory with a universe and Pi types based on single
substitutions in Agda, and derived the rules for CwF using the
technique of alpha-normalisation.
ABSTRACT. In this talk, we wish to introduce a dependent type theory with inductive-coinductive types that interprets equality as a coinductive type and, dually, features apartness as an inductive relation.
ABSTRACT. We present a novel analysis of the Loeb induction principle from guarded recursion. Taking advantage of recent work in modal type theory and cubical type theory, we derive Loeb induction from a simpler and more conceptual set of primitives. We combine these insights to produce Gatsby, the first guarded type theory capturing the rich modal structure of the topos of trees alongside Loeb induction without immediately precluding canonicity or normalization.
ABSTRACT. Our work is motivated by a recent construction of a model of univalent type theory by Sattler. This construction is done in two steps. One instance of the first step is a cubical model based on presheaves over finite, non-empty posets (in contrast with simplicial sets, which are based on presheaves over finite, non-empty linear posets). In a second step, a new model of type theory is obtained by localizing along a lex modality. This second model validates Whitehead’s principle and dependent choice. Furthermore, it is expected to give a good constructive notion of homotopy types of spaces.
We design a cubical type theory based on the first model, which can be interpreted in the second model as well. This type theory is an extension of both Cartesian cubical type theory, and the Dedekind version of CCHM. We have written a corresponding type checker and evaluator.
ABSTRACT. We present the on-going work to extend Quantitative Type Theory (QTT) with inductive families whose constructors are user-annotated. We give the general scheme for defining lists with quantities, which we believe can be extended to arbitrary inductive families, subsuming instances of datatypes like dependent pairs, unit, Boolean, natural numbers, and lists found in existing literature.
Looking Back: A Probabilistic Inverse Perspective on Test Generation
ABSTRACT. Software validation is hard, among other things because of the sheer size of the input space. Reversible computation has shown promise for mitigating some difficulties in software debugging, but has not been applied to the wider area of software validation. To alleviate this, we propose relaxing reversible computing to a combination of probabilistic and inverse computation. This creates a new model that is a strong candidate for mitigating the difficulties of software validation.
Polarized Lambda-Calculus at Runtime, Dependent Types at Compile Time
ABSTRACT. We present a particular two-level type theory where the object-level is a
simply-typed polarized lambda calculus with computation types (functions,
computational products) and value types (ADTs, closures), and the meta-level is
a standard dependent type theory. This appears to be an excellent setting for a
wide range of staged programming. The polarization gives us precise control over
closures and call arities, in a lightweight way. The computational products can
be used to generate well-typed mutual blocks with no overhead in downstream
compilation. We develop two applications. First, a staged monad transformer
library. Second, a staged stream library, where we have very concise definitions
for zipping and concatMap with guaranteed fusion, and streams are also
parameterized over monads. For concatMap, we essentially rely on a generativity
axiom, which internally states that certain metaprograms cannot analyze object
terms. This is somewhat similar to internal parametricity statements in type
theories.