Grading call-by-push-value, explicitly and implicitly
ABSTRACT. We present call-by-push-value with effects (CBPVE), a refinement of Levy's call-by-push-value (CBPV) calculus in which the types contain behavioural information about the effects of computations. CBPVE fits well into the existing literature on graded types and computational effects. We demonstrate this by providing graded call-by-value and call-by-name translations into CBPVE, and a semantics based on algebras of a graded monad.
CBPVE is designed as a standalone calculus, with explicit subtyping coercions. We use it to study the assignment of graded types to the terms of an ungraded calculus such as CBPV, using implicit coercions. To interpret such terms in a model that accounts for the grades, one has to prove a coherence result for the coercions. We show that, in the case of a graded monadic semantics, the necessary coherence result is false in general. To solve this problem, we show that a mild condition on the grades is enough to guarantee coherence, giving the first proof of a coherence result for grading coercions, and hence also the first graded monadic semantics for CBPV computations.
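For readers less familiar with the underlying structure, the following is a minimal Haskell sketch of a graded monad, the structure on which the paper's semantics is based; the grading by type-level naturals and all names here are illustrative, not taken from the paper.

```haskell
{-# LANGUAGE DataKinds, PolyKinds, TypeFamilies, TypeOperators #-}

import Data.Kind (Type)
import GHC.TypeLits

-- A graded monad: `gret` carries the unit grade and `gbind` multiplies
-- grades, so a computation's type records behavioural information
-- about its effects.
class GradedMonad (m :: k -> Type -> Type) where
  type Unit m :: k
  type Mult m (e :: k) (f :: k) :: k
  gret  :: a -> m (Unit m) a
  gbind :: m e a -> (a -> m f b) -> m (Mult m e f) b

-- Example grading: an upper bound on the number of "tick" effects.
newtype Tick (n :: Nat) a = Tick { runTick :: a }

instance GradedMonad Tick where
  type Unit Tick = 0
  type Mult Tick e f = e + f
  gret             = Tick
  gbind (Tick a) k = Tick (runTick (k a))

tick :: Tick 1 ()
tick = Tick ()

twoTicks :: Tick 2 ()
twoTicks = tick `gbind` \_ -> tick
```

In this picture, a grading coercion moves a computation from one grade to a larger one; the coherence question of the paper is whether all ways of inserting such coercions yield the same interpretation.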
ABSTRACT. Skeletal call-by-need is an optimization of call-by-need evaluation, also known as "fully lazy sharing": when a value has to be duplicated, it is first split into its "skeleton", which is duplicated, and its "flesh", which is instead kept shared.
Here, we provide two cost analyses of skeletal call-by-need. Firstly, we provide a family of terms showing that skeletal call-by-need can be asymptotically exponentially faster than call-by-need in both time *and* space; to our knowledge, this is the first such evidence.
Secondly, we prove that skeletal call-by-need can be implemented efficiently, that is, with bi-linear overhead. This result is obtained by providing a new, streamlined presentation of ideas by Shivers and Wand for the reconstruction of skeletons, which is then plugged into the study of an abstract machine following the distillation technique of Accattoli et al.
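To make the skeleton/flesh terminology concrete, here is a toy Haskell sketch of fully lazy splitting for the pure lambda-calculus; the simplified AST and the function names are ours, not the paper's formal definitions.

```haskell
-- Under a binder, maximal subterms mentioning no bound variable are
-- "flesh" and stay shared; the rest is the "skeleton", which is what
-- actually gets duplicated.
data Term = Var String | App Term Term | Lam String Term
  deriving Show

fvs :: Term -> [String]
fvs (Var y)   = [y]
fvs (App s u) = fvs s ++ fvs u
fvs (Lam y u) = filter (/= y) (fvs u)

-- Returns the skeleton together with the shared "flesh" bindings.
skeletonOf :: Term -> (Term, [(String, Term)])
skeletonOf t0 = let (_, t', bs) = go 0 [] t0 in (t', bs)
  where
    go n bound s
      | not (null bound) && all (`notElem` bound) (fvs s) =
          let v = "_sh" ++ show n in (n + 1, Var v, [(v, s)])
    go n bound (App s u) =
      let (n1, s', bs) = go n  bound s
          (n2, u', cs) = go n1 bound u
      in (n2, App s' u', bs ++ cs)
    go n bound (Lam y s) =
      let (n1, s', bs) = go n (y : bound) s
      in (n1, Lam y s', bs)
    go n _ v = (n, v, [])
```

For example, on \x. x (y z) this yields the skeleton \x. x _sh0 with the shared binding _sh0 = y z: duplicating the value copies only the skeleton, while y z is computed at most once.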
Quantitative Types for the Functional Machine Calculus
ABSTRACT. The Functional Machine Calculus (FMC, Heijltjes 2022) extends the lambda-calculus with the computational effects of global mutable store, input/output, and probabilistic choice while maintaining confluent reduction and simply-typed strong normalization. Based on a simple call-by-name stack machine in the style of Krivine, the FMC models effects through additional argument stacks, and introduces sequential composition through a continuation stack to encode call-by-value behaviour, where simple types guarantee termination of the machine.
The present paper provides a discipline of quantitative types, also known as non-idempotent intersection types, for the FMC, in two variants. In the weak variant, typeability coincides with termination of the stack machine and with spine normalization, while exactly measuring the transitions in machine evaluation. The strong variant characterizes strong normalization through a notion of perpetual evaluation, while giving an upper bound to the length of reductions. Through the encoding of effects, quantitative typeability coincides with termination for higher-order mutable store, input/output, and probabilistic choice.
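As background for the machine model, here is the classic call-by-name Krivine machine for the plain lambda-calculus, as a Haskell sketch with de Bruijn indices; the FMC's additional argument and continuation stacks are beyond this snippet, and the names are ours.

```haskell
-- The classic Krivine machine: a state is a closure (term plus
-- environment) together with an argument stack.
data Term    = Var Int | Lam Term | App Term Term
data Closure = Closure Term Env
type Env     = [Closure]
type Stack   = [Closure]

step :: (Closure, Stack) -> Maybe (Closure, Stack)
step (Closure (App t u) e, s)     = Just (Closure t e, Closure u e : s) -- push
step (Closure (Lam t)   e, c : s) = Just (Closure t (c : e), s)         -- pop
step (Closure (Var n)   e, s)     = case drop n e of
  c : _ -> Just (c, s)                                                  -- lookup
  []    -> Nothing                                                      -- open term
step _                            = Nothing  -- Lam with empty stack: final

-- Run to a final (or stuck) state, counting transitions.
run :: Term -> ((Closure, Stack), Int)
run t = loop (Closure t [], []) 0
  where loop st n = maybe (st, n) (\st' -> loop st' (n + 1)) (step st)
```

The weak quantitative types of the paper play the role of such a transition count for the FMC's machine: typeability coincides with termination while exactly measuring the machine's transitions.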
What does it take to certify a conversion checker?
ABSTRACT. We report on a detailed exploration of the properties of conversion (definitional equality) in dependent type theory, with the goal of certifying decision procedures for it. While in that context the property of normalisation has attracted the most attention, we instead emphasize the importance of *injectivity* properties, showing that they alone are both crucial and sufficient to certify most desirable properties of conversion checkers.
We also explore the certification of a fully untyped conversion checker, with respect to a typed specification, and show that the story is mostly unchanged, although the exact injectivity properties needed are subtly different.
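As a point of reference, a minimal untyped conversion checker for the pure lambda-calculus looks as follows; this is a Haskell sketch under our own simplifications (no eta, no dependent types, and it terminates only on normalizing inputs), not the checker certified in the paper.

```haskell
-- Terms with de Bruijn indices.
data Term = Var Int | Lam Term | App Term Term

shift :: Int -> Int -> Term -> Term
shift d c (Var i)   = Var (if i >= c then i + d else i)
shift d c (Lam b)   = Lam (shift d (c + 1) b)
shift d c (App t u) = App (shift d c t) (shift d c u)

subst :: Int -> Term -> Term -> Term
subst j s (Var i) | i == j    = s
                  | i > j     = Var (i - 1)
                  | otherwise = Var i
subst j s (Lam b)   = Lam (subst (j + 1) (shift 1 0 s) b)
subst j s (App t u) = App (subst j s t) (subst j s u)

-- Weak-head normalization: reduce only until the head is stable.
whnf :: Term -> Term
whnf (App t u) = case whnf t of
  Lam b -> whnf (subst 0 u b)
  t'    -> App t' u
whnf t = t

-- Conversion: compare weak-head normal forms structurally.
conv :: Term -> Term -> Bool
conv t u = case (whnf t, whnf u) of
  (Var i,   Var j)   -> i == j
  (Lam b,   Lam c)   -> conv b c
  (App a b, App c d) -> conv a c && conv b d
  _                  -> False
```

Certifying such a structural comparison against a typed specification is where the injectivity properties emphasized above come into play.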
ABSTRACT. We present novel semiring semantics for abstract reduction systems (ARSs). More precisely, we provide a weighted version of ARSs, where the reduction steps induce weights from a semiring. Inspired by provenance analysis in database theory and logic, we obtain a formalism that can be used for provenance analysis of arbitrary ARSs. Our semantics handle (possibly unbounded) non-determinism and possibly infinite reductions. Moreover, we develop several techniques to prove upper and lower bounds on the weights resulting from our semantics, and show that in this way one obtains a uniform approach to analyzing several different properties such as termination, derivational complexity, space complexity, and safety, as well as combinations of these properties.
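To give the flavour of the approach, here is a toy Haskell sketch of semiring-weighted reduction for a finitely branching, terminating ARS; the names and the restriction to finite systems are ours, while the paper also covers unbounded non-determinism and infinite reductions.

```haskell
-- A semiring: consecutive steps compose with (.*.), alternative
-- reductions combine with (.+.).
class Semiring w where
  zero, one    :: w
  (.+.), (.*.) :: w -> w -> w

-- Total weight of all maximal reduction sequences from `a`, where
-- `steps a` lists the weighted one-step reducts of `a`.
weight :: Semiring w => (a -> [(w, a)]) -> a -> w
weight steps a = case steps a of
  [] -> one
  ws -> foldr (.+.) zero [ w .*. weight steps b | (w, b) <- ws ]

-- Arctic (max/plus) semiring: with every step weighted `Arctic 1`,
-- `weight` computes the length of the longest derivation, i.e. the
-- derivation height used in derivational complexity analysis.
newtype Arctic = Arctic Int deriving Show
instance Semiring Arctic where
  zero = Arctic minBound
  one  = Arctic 0
  Arctic a .+. Arctic b = Arctic (max a b)
  Arctic a .*. Arctic b = Arctic (a + b)
```

Swapping in other semirings recovers other analyses in the same way, which is the uniformity the abstract alludes to.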
ABSTRACT. Logically constrained simply-typed term rewriting systems (LCSTRSs) are a higher-order formalism for program analysis with support for primitive data types. The termination problem of LCSTRSs has been studied so far in the setting of full rewriting. This paper modifies the higher-order constrained dependency pair framework to prove innermost termination, which corresponds to the termination of programs under call by value. We also show that the notion of universal computability with respect to innermost rewriting can be effectively handled in the modified, innermost framework, which lays the foundation for open-world termination analysis of programs under call by value via LCSTRSs.
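For illustration, a hypothetical LCSTRS over the integers (our example, not from the paper) might contain the constrained rules below; innermost rewriting contracts a redex only when its arguments are in normal form, which is why it matches call-by-value program execution.

```latex
\begin{align*}
  \mathsf{fact}(x) &\to 1                             & [x \le 0]\\
  \mathsf{fact}(x) &\to x \times \mathsf{fact}(x - 1) & [x > 0]
\end{align*}
```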
Completeness of the decreasing diagrams method for proving confluence of rewriting systems of the least uncountable cardinality
ABSTRACT. We show that every confluent abstract rewriting system (ARS) whose cardinality does not exceed the first uncountable cardinal belongs to the class DCR3, i.e. the class of confluent ARSs for which confluence can be proved with the help of the decreasing diagrams method using the set of labels {0,1,2} ordered so that 0 < 1 < 2 (in the general case, the decreasing diagrams method with two labels is not sufficient for proving confluence of such ARSs). Under the Continuum Hypothesis, this result implies that the decreasing diagrams method is sufficient for establishing confluence of ARSs on many structures of interest to applied mathematics and various interdisciplinary fields (confluence of ARSs on real numbers, continuous real functions, etc.).
We provide a machine-checked proof of a formalized version of the main result in the Isabelle proof assistant, using the HOL logic and the HOL-Cardinals theory.
ABSTRACT. We introduce ultrarings, which simultaneously generalize commutative rings and Boolean lextensive categories. As such, they make it possible to blend together standard algebraic notions (from commutative algebra) and logical notions (from categorical logic), providing a unifying descriptive framework in which complexity classes over arbitrary rings (as in the Blum, Shub, Smale model) and the usual Boolean complexity classes may be captured in a uniform way.
ABSTRACT. Quantum algorithms leverage quantumly-controlled data in order to achieve computational advantage. This implies that the programs use constructs depending on quantum data, and not just on classical data such as measurement outcomes. Current compilation strategies for quantum control flow involve compiling the branches of a quantum conditional either in depth or in width, which in general leads to circuits of exponential size. This problem is known as the branch sequentialization problem. We introduce and study a compilation technique for avoiding branch sequentialization in a language that is sound and complete for quantum polynomial time, thus improving on existing polynomial-size-preserving compilation techniques.
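To see why branch sequentialization is costly, consider the toy cost model below, a Haskell sketch of our own and not the paper's compilation scheme: if a quantum conditional is compiled by emitting a controlled version of each branch in sequence, nesting n conditionals yields a circuit of size exponential in n.

```haskell
-- Toy circuit-size model for compiled quantum programs.
data Prog = Gate            -- a primitive gate
          | Seq Prog Prog   -- sequential composition
          | QIf Prog Prog   -- quantum conditional on a control qubit

-- Naive compilation emits both branches (controlled and
-- anti-controlled), so a conditional's size is the sum of its branches.
size :: Prog -> Int
size Gate      = 1
size (Seq p q) = size p + size q
size (QIf p q) = 1 + size p + size q

-- n nested conditionals: size (nested n) = 2^(n+1) - 1, exponential in n.
nested :: Int -> Prog
nested 0 = Gate
nested n = QIf (nested (n - 1)) (nested (n - 1))
```

Avoiding this blowup while preserving polynomial size is exactly what the proposed compilation technique targets.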