Isomorphism Invariance, also called the Principle of Isomorphism or the Principle of Structuralism, is the idea that isomorphic mathematical objects share the same structure and relevant properties: whatever can reasonably be done with an object can also be done with an isomorphic copy.
Isomorphism Invariance: “If A ≅ B and Φ(A) then Φ(B).”
Without any limitation on Φ, we may take Φ(X) to be A = X and obtain:
Isomorphism Reflection: “If A ≅ B then A = B.”
Conversely, Isomorphism Reflection implies Isomorphism Invariance by Leibniz's principle of the indiscernibility of identicals.
Depending on how we interpret ≅ and =, these principles might be plainly false, uninspiringly true, or a foundational tenet:
In set theory, if ≅ is existence of a bijection, the principles are false.
In set theory, if ≅ is isomorphism of sets qua ∈-structures, both principles are true because Isomorphism Reflection is just the Extensionality axiom.
In type theory, if = is propositional equality and ≅ is equivalence of types, Isomorphism Reflection is (a consequence of) the Univalence axiom.
While Isomorphism Invariance is widely used in informal practice, with ≅ understood as the existence of a structure-preserving isomorphism, Isomorphism Reflection seems quite unreasonable because it implies bizarre statements, e.g., that there is just one set of size one. Any formal justification of the former must therefore address the tension with the latter principle.
In this talk I will review what is known about the formal treatment of the principles, recall the cardinal model of type theory by Théo Winterhalter and myself, which shows that Isomorphism Reflection is consistent when = is taken as judgemental equality, and discuss the possibility of having other models validating judgemental Isomorphism Reflection that might be compatible with non-classical reasoning principles. I shall also touch on the possibility of a restricted form of Isomorphism Reflection that would provide a satisfactory formal definition of “canonical isomorphism”.
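As an illustration, the passage from Reflection to Invariance is just Leibniz-style transport. A minimal Lean sketch, where Iso and the reflection axiom are local, illustrative definitions (not from any library):

```lean
-- Illustrative only: a bare-hands notion of isomorphism of types.
structure Iso (A B : Type) where
  to    : A → B
  inv   : B → A
  left  : ∀ a, inv (to a) = a
  right : ∀ b, to (inv b) = b

-- Isomorphism Reflection, postulated as an axiom for this sketch.
axiom iso_reflection {A B : Type} : Iso A B → A = B

-- Isomorphism Invariance then follows by transport along the equality.
theorem iso_invariance (Φ : Type → Prop) {A B : Type}
    (e : Iso A B) (h : Φ A) : Φ B :=
  iso_reflection e ▸ h
```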
ABSTRACT. This abstract investigates how users could be relieved of proving the termination of observably terminating computations in a formal system combining dependent types and inductive types, such as the Coq proof assistant.
ABSTRACT. Currently, cubical type theories are the only known systems which support
computational univalence. We can use computation in these systems to shortcut
some proofs, by appealing to definitional equality of sides of
equations. However, efficiency issues in existing implementations often preclude such computational proofs, or it takes a large amount of effort to find definitions which are feasible to compute. We investigate these efficiency issues in the cubical setting, although most of our findings transfer to other systems. We propose an implementation of cubical
normalization by evaluation with the following two properties. First, costly
interval substitution can only arise from computation with glue types. Second,
when there are no free fibrant variables in the context, no computation rule
needs to evaluate all components of a cubical system.
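For readers unfamiliar with the underlying technique, here is a minimal normalization-by-evaluation sketch for the untyped lambda calculus in Haskell. It is illustrative only: the cubical algorithm of the abstract additionally tracks interval variables, systems, and glue types.

```haskell
-- Terms use de Bruijn indices; values use the host language's functions.
data Tm = Var Int | Lam Tm | App Tm Tm deriving (Eq, Show)

data Val = VLam (Val -> Val) | VNe Ne
data Ne  = NVar Int | NApp Ne Val  -- neutral: a stuck variable under applications

-- Evaluate a term in an environment of values.
eval :: [Val] -> Tm -> Val
eval env (Var i)   = env !! i
eval env (Lam t)   = VLam (\v -> eval (v : env) t)
eval env (App t u) = case eval env t of
  VLam f -> f (eval env u)
  VNe n  -> VNe (NApp n (eval env u))

-- Read a semantic value back into a normal form; k counts binders passed.
quote :: Int -> Val -> Tm
quote k (VLam f) = Lam (quote (k + 1) (f (VNe (NVar k))))
quote k (VNe n)  = quoteNe k n

quoteNe :: Int -> Ne -> Tm
quoteNe k (NVar l)   = Var (k - l - 1)  -- de Bruijn level to index
quoteNe k (NApp n v) = App (quoteNe k n) (quote k v)

-- Normal form of a closed term.
nf :: Tm -> Tm
nf = quote 0 . eval []
```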
ABSTRACT. We recover a classical model of dependent type theory and revisit it to describe a version of coercive subtyping, for which we provide rules, coherence conditions, and examples.
LayeredTypes - Combining dependent and independent type systems
ABSTRACT. Many different type systems have been suggested over the years to ensure specific properties of a program such as resource usage (Linear Types), value predicates (Liquid Types), or correct following of a distributed protocol (Session Types).
Some of these type systems can be dependent on or independent of each other. Both Liquid Types and Session Types require a base type system with primitive types (such as int and bool) to exist, but they can be orthogonal to each other: one can have a Liquid Type whose base type is int, and a Session Type that also relies on the base value being int while ignoring the liquid predicate.
In this talk, we try to address the challenge of how multiple type systems can co-exist in the same programming language, allowing different, valid combinations to be used in the type-checking of a program.
We present LayeredTypes, in which developers write ordinary programs but can also define additional type systems as layers. Each layer defines its basic types and how typechecking happens. Additionally, a layer may depend on another layer if it requires information (such as types or typing contexts) from that layer.
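A hypothetical sketch of the layering idea in Haskell (all names here are invented for illustration and are not the LayeredTypes interface): each layer checks a program and may consult the results of the layers it depends on.

```haskell
type LayerName = String

-- A layer knows its dependencies and how to check a program,
-- given the results of the layers it depends on.
data Layer info = Layer
  { layerName :: LayerName
  , dependsOn :: [LayerName]
  , check     :: [(LayerName, info)] -> String -> Either String info
  }

-- Run layers in dependency order, threading earlier results to later ones.
checkAll :: [Layer info] -> String -> Either String [(LayerName, info)]
checkAll layers prog = foldl step (Right []) layers
  where
    step acc layer = do
      done <- acc
      let env = [p | p@(n, _) <- done, n `elem` dependsOn layer]
      i <- check layer env prog
      pure (done ++ [(layerName layer, i)])

-- Toy layers: a base-type layer and a refinement layer on top of it.
baseLayer :: Layer String
baseLayer = Layer "base" [] (\_ _ -> Right "int")

liquidLayer :: Layer String
liquidLayer = Layer "liquid" ["base"] $ \env _ ->
  case lookup "base" env of
    Just t  -> Right (t ++ " >= 0")
    Nothing -> Left "liquid layer needs base types"
```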
ABSTRACT. The idea is that we start with a type theory with |Π|-types and a
universe, and we develop a dependent combinator calculus
in the universe. The main insight is that apart from the dependent
versions of |S| and |K| we now also need |u| and |Π|.
We have only started on this work. We need to show that the
abstraction algorithm works in general and has all the desired
properties. We also need to give a more semantic argument for why
the non-dependent version of |K| can be derived from
the dependent one.
We would like to adapt the extensionality axioms to the dependent
case. Furthermore, the question is whether we can, by further application
of Münchausian reasoning, completely eliminate the outer level and
avoid variables altogether, as in the non-dependent calculus.
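For reference, the classic non-dependent starting point, bracket abstraction into |S|, |K| and |I|, can be sketched in Haskell as follows (an illustrative toy, not the dependent calculus of the abstract):

```haskell
data C = S | K | I | CVar String | CApp C C deriving (Eq, Show)

-- abstractV x t: eliminate the variable x from the term t,
-- producing a combinator term with the same behaviour.
abstractV :: String -> C -> C
abstractV x (CVar y) | x == y = I
abstractV x (CApp a b) = CApp (CApp S (abstractV x a)) (abstractV x b)
abstractV _ t = CApp K t

-- Reduce using the S, K, I rewrite rules.
reduce :: C -> C
reduce (CApp (CApp (CApp S f) g) x) = reduce (CApp (CApp f x) (CApp g x))
reduce (CApp (CApp K a) _) = reduce a
reduce (CApp I a) = reduce a
reduce (CApp a b) = case reduce a of
  a' | a' /= a   -> reduce (CApp a' b)
     | otherwise -> CApp a (reduce b)
reduce t = t
```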
Higher coherence equations of semi-simplicial types as n-cubes of proofs
ABSTRACT. We adopt an "n-cubes of proofs" point of view on the construction of semi-simplicial types. Proof terms of the coherence equations fit inside an infinite collection of n-cubes, and how they may combine is a consequence of the combinatorial structure of n-cubes.
This abstract was also submitted to the HoTT/UF 2023 Workshop in Vienna, Austria.
ABSTRACT. We combine the theory of inductive data types with the theory of universal measurings.
By doing so, we find that many categories of algebras of endofunctors are actually enriched
in the corresponding category of coalgebras of the same endofunctor. The enrichment
captures all possible partial algebra homomorphisms, defined by measuring coalgebras.
Thus this enriched category carries more information than the usual category of algebras
which captures only total algebra homomorphisms. We specify new algebras besides the
initial one using a generalization of the notion of initial algebra.
ABSTRACT. Languages like Coq and Agda forbid users from defining non-strictly-positive
data types. Indeed, one could otherwise very
easily define non-terminating programs. However, this strict-positivity
criterion is nothing more than a syntactic restriction, which sometimes
prevents perfectly reasonable and innocuous data types from being defined. We
present ongoing work on making positivity checking more modular in Agda, by
allowing polarity annotations in function types and making it possible
to enforce the variance of functions simply by type checking.
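To see why the restriction exists, here is a sketch in Haskell, which accepts non-strictly-positive datatypes: self-application becomes well-typed and yields a closed diverging term with no explicit recursion.

```haskell
-- D occurs to the left of an arrow in its own constructor, so
-- Coq's and Agda's positivity checkers would reject it.
newtype D = D (D -> Int)

unD :: D -> (D -> Int)
unD (D f) = f

-- omega diverges if ever evaluated, yet contains no recursive call:
-- it is the classic self-application made typeable by D.
omega :: Int
omega = unD (D (\d -> unD d d)) (D (\d -> unD d d))
```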
ABSTRACT. A common litmus test for a language's capability for modularity is whether the
programmer is able both to extend existing data with new ways to construct it
and to add new functionality for this data, all in a way that preserves static type
safety: a conundrum otherwise known as the expression problem. In the
context of pure functional programming further modularity concerns arise from
the need to model a program's side effects explicitly using monads, whose syntax
and implementation we would ideally define separately and in a modular
fashion. In this work, we present a core calculus that acts as a minimal basis
for solving these modularity questions in a way that does not require us to work
with explicit embeddings of the initial algebra semantics of data types, and
we work towards developing a formal categorical semantics for this calculus.
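As a point of comparison, the explicit initial-algebra encoding that the proposed calculus aims to avoid can be sketched in Haskell in the style of "data types à la carte" (this is the baseline technique, not the calculus of the talk): new constructors come from functor coproducts, new functions from algebras.

```haskell
{-# LANGUAGE TypeOperators #-}

-- Signature functors for two constructors, and their coproduct.
data Lit e = Lit Int
data Add e = Add e e
data (f :+: g) e = Inl (f e) | Inr (g e)

-- Explicit initial-algebra machinery: fixpoint and fold.
newtype Fix f = In (f (Fix f))

instance Functor Lit where fmap _ (Lit n) = Lit n
instance Functor Add where fmap h (Add a b) = Add (h a) (h b)
instance (Functor f, Functor g) => Functor (f :+: g) where
  fmap h (Inl x) = Inl (fmap h x)
  fmap h (Inr y) = Inr (fmap h y)

foldFix :: Functor f => (f a -> a) -> Fix f -> a
foldFix alg (In t) = alg (fmap (foldFix alg) t)

-- A new function over existing data, given per-signature algebras.
class Eval f where evalAlg :: f Int -> Int
instance Eval Lit where evalAlg (Lit n) = n
instance Eval Add where evalAlg (Add a b) = a + b
instance (Eval f, Eval g) => Eval (f :+: g) where
  evalAlg (Inl x) = evalAlg x
  evalAlg (Inr y) = evalAlg y

eval :: (Functor f, Eval f) => Fix f -> Int
eval = foldFix evalAlg

-- 1 + 2, built in the combined signature Lit :+: Add.
ex :: Fix (Lit :+: Add)
ex = In (Inr (Add (In (Inl (Lit 1))) (In (Inl (Lit 2)))))
```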
Extensional Equality Preservation in Intensional MLTT
ABSTRACT. We revisit a logical principle which states for an endofunctor |F| on |Type| that its functorial |map| preserves the extensional equality of functions.
In previous work, we advocated the use of this principle from a pragmatic perspective, to avoid postulating function extensionality |funExt| when proving properties of generic programs in ITT. However, while the principle is provable for functors like |Maybe| or |List|, for others, like |Reader|, it is not.
In current work we therefore try to better understand for which functors it is provable and for which it is not. We can show that the principle is provable in ITT for all |Traversable| functors. We also explore a |Nat|-indexed family of "equality transformation" principles that generalises the preservation principle for the case of |Reader| and helps to elucidate how it relates to function extensionality. It turns out that the instance of this family at 0 is logically equivalent to |funExt|, while we recover the preservation principle for |Reader E| at 1 and see that it proves |funExt| for functions with domain |E|.
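A small Lean sketch of the principle for |List| (local definitions, not the authors' development): pointwise equality is preserved by List.map, with no appeal to function extensionality.

```lean
-- Extensional (pointwise) equality of functions.
def ExtEq {A B : Type} (f g : A → B) : Prop := ∀ a, f a = g a

-- The preservation principle, provable for List by induction.
theorem map_preserves_extEq {A B : Type} {f g : A → B}
    (h : ExtEq f g) : ExtEq (List.map f) (List.map g) := by
  intro xs
  induction xs with
  | nil => rfl
  | cons a t ih =>
      simp only [List.map]
      rw [h a, ih]
```

For |Reader E| the analogous statement already asserts an equality of functions, which is exactly where |funExt| re-enters.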
Composable partial functions in Coq, totally for free
ABSTRACT. I propose an early-stage Coq library for general recursion. The goal is to be able to write functions that may be only partially defined and still prove things about them. This work follows in the footsteps of ‘Turing-completeness totally free’ and the Braga method. Indeed, we use a free monad to represent general recursion that is close to the former, and we can instantiate it to total functions that can be extracted using ideas similar to the latter. Crucially, this library supports composition of such functions at virtually no cost to the user.
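The general shape of such a library can be sketched in Haskell (all names here are illustrative, not the library's actual API): a free-monad-like type with an explicit recursive-call node, and a fuel-bounded interpreter as one possible instantiation.

```haskell
-- A computation either returns a result, or performs a recursive
-- call on a new argument and continues with that call's result.
data Rec a b r
  = Ret r
  | Call a (b -> Rec a b r)

-- One way to instantiate such a function: a fuel-bounded interpreter
-- that gives up (Nothing) when the fuel runs out.
run :: (a -> Rec a b b) -> Int -> a -> Maybe b
run _ 0 _ = Nothing
run f n a = go (f a)
  where
    go (Ret r)    = Just r
    go (Call x k) = run f (n - 1) x >>= go . k

-- Example: factorial written with an explicit recursive-call node.
fact :: Integer -> Rec Integer Integer Integer
fact 0 = Ret 1
fact n = Call (n - 1) (\r -> Ret (n * r))
```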
ABSTRACT. Liquid Haskell is a refinement type checker for Haskell programs
that can also be used as a theorem prover to mechanically check user-provided proofs.
However, theorem proving in Liquid Haskell
has two main disadvantages.
First, the foundations of Liquid Haskell as a theorem prover
have neither been well studied nor formalized.
Second, although Liquid Haskell can check proofs,
it does not assist in their development.
Both these disadvantages will be addressed if
Liquid Haskell proofs are translated to Coq.
To understand this translation, we started by encoding
the Ackermann function and related arithmetic theorems
in Liquid Haskell
and in (to-be-automatically-derived) Coq.
In this abstract, we present how we aim to translate
1. types, 2. functions, 3. subtyping, and 4. the refinement logic of Liquid Haskell in Coq.
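For concreteness, the Ackermann function with a Liquid Haskell-style refinement and termination annotation might look as follows (the {-@ ... @-} annotation is illustrative and is ignored by plain GHC; only the Haskell part runs here):

```haskell
-- Refinement signature with a lexicographic termination metric [n, m],
-- in Liquid Haskell's annotation-comment style.
{-@ ack :: n:Nat -> m:Nat -> Nat / [n, m] @-}
ack :: Integer -> Integer -> Integer
ack n m
  | n == 0    = m + 1
  | m == 0    = ack (n - 1) 1
  | otherwise = ack (n - 1) (ack n (m - 1))
```

The translation task is then to turn such a definition, together with refinement-level proofs about it, into a corresponding Coq fixpoint and Coq theorems.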