
08:00-10:00 Session 35A: Theorem Proving: Theory and Practice (ARCADE 2021)
Location: ZoomRoom 1
Proving UNSAT in SMT: The Case of Quantifier Free Non-Linear Real Arithmetic

ABSTRACT. We discuss the topic of unsatisfiability proofs in SMT, particularly with reference to quantifier free non-linear real arithmetic. We outline how the methods here do not admit trivial proofs and how past formalisation attempts are not sufficient. We note that the new breed of local search based algorithms for this domain may offer an easier path forward.

Search Spaces for Theorem Proving Strategies

ABSTRACT. We present a way to analyze the complexity of theorem proving strategies in an implementation independent manner.

On Evaluating Theorem Provers

ABSTRACT. Evaluating automatic theorem provers is a non-trivial, subtle endeavour. There are a number of conflicting issues to consider, be they computational, statistical, economic, or — most difficult of all — social. We present a review of such challenges, seen from the perspective of, and with application to, the first-order system Vampire. We further highlight the exemplary achievements of existing thought and practical tools, and sketch some future directions for work in this area. We are not the first to consider this issue and do not claim to offer the final word but believe strongly that this is a topic that requires significant and sustained attention from our community.

Relevance and Abstraction

ABSTRACT. In the past abstractions have been used as a guide to the structure of a proof. Here we propose to use them instead as a means to select relevant clauses from a large set of clauses.

08:00-09:00 Session 35E: WST Workshop: Invited Talk (WST 2021)
Location: ZoomRoom 2
Efficient Computation of Polynomial Resource Bounds for Bounded-Loop Programs

ABSTRACT. In Bound Analysis we are given a program and we seek functions that bound the growth of computed values, the running time, etc., in terms of input parameters. Problems of this type are uncomputable for ordinary, full languages, but may be solvable for weak languages, possibly representing an abstraction of a "real" program. In 2008, Jones, Kristiansen and I showed that, for a weak but non-trivial imperative programming language, it is possible to decide whether values are polynomially bounded. In this talk, I will focus on current work with Geoff Hamilton, where we show how to compute tight polynomial bounds (up to constant factors) for the same language. We have been able to do that as efficiently as possible, in the sense that our solution has polynomial space complexity, and we showed the problem to be PSPACE-hard. The analysis is a kind of abstract interpreter which computes in a domain of symbolic bounds plus just enough information about data-flow to correctly analyse loops. A salient point is how we solved the problem for multivariate bounds through the intermediary of univariate bounds. Another one is that the algorithm looks for worst-case lower bounds, but a completeness result shows that the maximum of these lower-bound functions is also an upper bound (up to a constant factor).
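The core language and the analysis algorithm are only summarized above; purely as a minimal sketch of the setting (the example program and function names here are illustrative, not taken from the talk), consider a bounded-loop program in this spirit together with a polynomial bound on its computed value:

```python
def run(x1, x2):
    """Interpret the bounded-loop program 'loop X2 { X1 := X1 + X2 }':
    the loop body executes exactly X2 times, so the program always halts."""
    for _ in range(x2):
        x1 = x1 + x2
    return x1

def bound(x1, x2):
    """Symbolic polynomial bound on the final value of X1: x1 + x2^2."""
    return x1 + x2 ** 2

# Soundness of the bound, checked on a small grid of inputs:
assert all(run(a, b) <= bound(a, b) for a in range(20) for b in range(20))
```

Here the bound happens to be exact; in general the analysis described in the talk guarantees tightness only up to constant factors.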

09:00-10:00 Session 36A: LFMTP Workshop: Meta-Theory Reasoning (LFMTP 2021)
Location: ZoomRoom 3
SMLtoCoq: Automated Generation of Coq Specifications and Proof Obligations from SML Programs with Contracts
PRESENTER: Laila El-Beheiry

ABSTRACT. Formally reasoning about functional programs is supposed to be straightforward and elegant; in practice, however, it is not typically done as a matter of course. Reasoning in a proof assistant requires "reimplementing" the code in those tools, which is far from trivial. SMLtoCoq provides an automatic translation of SML programs and function contracts into Coq. Programs are translated into Coq specifications, and function contracts into theorems, which can then be formally proved. Using the Equations plugin and other well-established Coq libraries, we are able to translate SML programs without side effects containing partial functions, structures, functors, records, among others. Additionally, we provide a Coq version of many parts of SML’s basis library, so that calls to these libraries are kept almost as is.

Countability of Inductive Types Formalized in the Object-Logic Level

ABSTRACT. The set of integer lists of finite length and the set of binary trees with integer labels are both countably infinite. Many inductively defined types also have countable elements. In this paper, we formalize the syntax of first-order inductive definitions in Coq and prove them countable, under some side conditions. Instead of writing a proof generator in a meta-language, we develop an axiom-free proof in the Coq object logic. Our proof is a dependently typed Coq function from the syntax of the inductive definition to the countability of the type. Based on this proof, we provide a Coq tactic to automatically prove the countability of concrete inductive types. We have also developed Coq libraries for countability and for the syntax of inductive definitions, which are valuable in their own right.
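The paper's construction is a dependently typed Coq proof; purely as an executable illustration of the underlying idea (an injective encoding of an inductive type into the naturals, sketched here in Python rather than Coq), lists of naturals can be encoded via the Cantor pairing function:

```python
from itertools import product

def pair(a, b):
    # Cantor pairing: a bijection between N x N and N
    return (a + b) * (a + b + 1) // 2 + b

def encode(xs):
    """Injective encoding of lists of naturals into N: [] maps to 0,
    x::rest maps to pair(x, encode(rest)) + 1.  Bijectivity of pair plus
    the +1 offset (separating empty from nonempty) make the encoding
    injective, which witnesses countability of the list type."""
    code = 0
    for x in reversed(xs):
        code = pair(x, code) + 1
    return code

# Sanity check: no two short lists collide.
codes = [encode(list(t)) for n in range(4) for t in product(range(4), repeat=n)]
assert len(set(codes)) == len(codes)
```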

09:00-10:00 Session 36B: WST Workshop: Classical Topics in Termination (WST 2021)
Mixed Base Rewriting for the Collatz Conjecture

ABSTRACT. We explore the Collatz conjecture and its variants through the lens of termination of string rewriting. We construct a rewriting system that simulates the iterated application of the Collatz function on strings corresponding to mixed binary-ternary representations of positive integers. Termination of this rewriting system is equivalent to the Collatz conjecture. To show the feasibility of our approach in proving mathematically interesting statements, we implement a minimal termination prover that uses the automated method of matrix/arctic interpretations and we perform experiments where we obtain proofs of nontrivial weakenings of the Collatz conjecture. Although we do not succeed in proving the Collatz conjecture, we believe that the ideas here represent an interesting new approach.
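The rewriting system itself operates on mixed binary-ternary strings; as a quick reference point (this sketch is illustrative and is not the paper's rewriting system), the function it simulates is the "shortcut" Collatz map T, whose termination at 1 for every positive integer is exactly the conjecture:

```python
def T(n):
    """Shortcut Collatz map: T(n) = n/2 for even n, (3n+1)/2 for odd n."""
    return n // 2 if n % 2 == 0 else (3 * n + 1) // 2

def steps_to_one(n):
    """Number of T-iterations until 1 is reached.  That this loop
    terminates for every n >= 1 is precisely the Collatz conjecture."""
    k = 0
    while n != 1:
        n = T(n)
        k += 1
    return k

# A brute-force check of a (very weak) weakening: T reaches 1 from all n < 10**4.
assert all(steps_to_one(n) >= 0 for n in range(1, 10**4))
```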

de Vrijer’s Measure for SN of λ→ in Scheme

ABSTRACT. We contribute a Scheme program for de Vrijer’s proof of strong normalization of the simply-typed lambda calculus.

10:30-11:30 Session 37A: TPTP World (ARCADE 2021)
Location: ZoomRoom 1
The Expansion, Modernisation, and Future of the TPTP World

ABSTRACT. The Thousands of Problems for Theorem Provers (TPTP) World is a rich infrastructure that supports the development, deployment, and application of Automated Theorem Proving (ATP) for classical logics. This paper describes proposed expansion and modernisation of some parts of the TPTP World: the TPTP Problem library, a new TPTP Data library, new logics and languages, and improved user services. Hopefully this will attract suggestions, feedback, and offers of support to help achieve these goals.

Management of the TPTP Problem Set

ABSTRACT. There is a need for help in the long term management of the TPTP problem set. We propose to discuss various possibilities for this.

10:30-12:00 Session 37D: LFMTP Workshop: Foundations (LFMTP 2021)
Location: ZoomRoom 3
Automating Induction by Reflection

ABSTRACT. Despite recent advances in automating theorem proving in full first-order theories, inductive reasoning still poses a serious challenge to state-of-the-art theorem provers. The reason is that in first-order logic induction requires an infinite number of axioms, which is not a feasible input for a theorem prover that requires finite input. Mathematical practice is to specify these infinite sets of axioms as axiom schemes. Unfortunately, these schematic definitions cannot be formalized in first-order logic, and are therefore not supported as inputs for first-order theorem provers. In this work we introduce a new method, inspired by the field of axiomatic theories of truth, that makes it possible to express schematic inductive definitions in the standard syntax of multi-sorted first-order logic. Further, we test the practical feasibility of the method with state-of-the-art theorem provers, comparing it to solvers’ native techniques for handling induction.
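The contrast between a scheme and a single reflected axiom can be sketched as follows, using standard natural-number induction as the example (the paper's construction covers general inductive definitions and the precise axiomatization differs):

```latex
% Induction as an axiom *scheme*: one first-order axiom per formula phi,
% hence infinitely many axioms.
\varphi(0) \land \forall n\,\bigl(\varphi(n) \to \varphi(n+1)\bigr)
  \to \forall n\,\varphi(n)

% With a satisfaction predicate Sat over a sort of formula codes, the
% whole scheme collapses into a single multi-sorted first-order axiom:
\forall p\,\bigl[\, \mathit{Sat}(p, 0)
  \land \forall n\,\bigl(\mathit{Sat}(p, n) \to \mathit{Sat}(p, n+1)\bigr)
  \to \forall n\,\mathit{Sat}(p, n) \,\bigr]
```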

Interacting safely with an unsafe environment

ABSTRACT. We give a presentation of Pure type systems where contexts need not be well-formed, and show that this presentation is equivalent to the usual one. The main motivation for this presentation is that, when we extend Pure type systems with computation rules, as in the logical framework Dedukti, we want to declare the constants before the computation rules that are needed to check the well-typedness of their type.

Foundations of the Squirrel meta-logic for reasoning over security protocols
PRESENTER: David Baelde

ABSTRACT. The formal verification of security protocols can be carried out in two categories of models. Symbolic models, pioneered by Dolev and Yao, represent messages by first-order terms and attacker capabilities by inference rules or equational theories. This approach allows automatic verification, notably using techniques from rewriting and logic. The computational model, however, represents messages as bitstrings and attackers as probabilistic polynomial-time (PTIME) Turing machines. It is the standard model for provable security in cryptography, but formal verification in that model remains challenging. Bana and Comon have recently proposed a compelling approach to derive guarantees in the computational model, which they call the computationally complete symbolic attacker (CCSA). It is based on first-order logic, with complex axioms derived from cryptographic assumptions. Verification in the original CCSA approach involves a translation of protocol traces into first-order terms that necessitates bounding the length of traces. The generated goals are difficult to process by humans and, so far, cannot be handled automatically either. We have proposed a refinement of the approach based on a meta-logic which conveniently replaces the translation step. We have implemented this refined approach in an interactive theorem prover, Squirrel, and validated it on a first set of case studies. In this paper, we present some improvements of the foundations of the Squirrel meta-logic and of its proof system, which are required as we consider more complex case studies. We extend our model to support memory cells in order to allow the analysis of security protocols that manipulate states. Moreover, we adapt the notion of trace model, which provides the semantics of our meta-logic formulas, to enable more natural and expressive modelling. We also present a generalized proof system which allows us to derive more protocol properties.

10:30-12:00 Session 37E: WST Workshop: Decidability and Analysis (WST 2021)
Polynomial Loops: Termination and Beyond

ABSTRACT. We consider triangular weakly non-linear loops (twn-loops) over subrings S of R_A, where R_A is the set of all real algebraic numbers. Essentially, the body of such a loop is a single assignment (x_1, ..., x_d) ← (c_1 · x_1 + pol_1, ..., c_d · x_d + pol_d) where each x_i is a variable, c_i ∈ S, and each pol_i is a (possibly non-linear) polynomial over S and the variables x_{i+1}, ..., x_d. We present a reduction from the question of termination on all inputs to the existential fragment of the first-order theory of S and R_A. For loops over R_A, our reduction entails decidability of termination. For loops over Z and Q, it proves semi-decidability of non-termination. Furthermore, we show that the halting problem, i.e., termination on a given input, is decidable for twn-loops over any subring of R_A. This also allows us to compute witnesses for non-termination. Moreover, we present the first computability results on the runtime complexity of such loops. More precisely, we show that for twn-loops over Z one can always compute a polynomial f such that the length of all terminating runs is bounded by f(∥(x_1, ..., x_d)∥), where ∥·∥ denotes the 1-norm. This result implies that the runtime complexity of a terminating triangular linear loop over Z is at most linear.
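As a concrete instance (the example loop is illustrative, not taken from the paper), here is a terminating twn-loop over Z together with a brute-force check that its runtime stays below a polynomial in the 1-norm of the input:

```python
def twn_loop(x1, x2):
    """Run the twn-loop  'while x1 > 0: (x1, x2) <- (x1 + x2, x2 - 1)'
    and return the number of iterations.  It has the triangular shape
    (x1's update mentions only x2; x2's update mentions no other variable)
    and terminates on all integer inputs: x2 eventually becomes negative,
    after which x1 strictly decreases."""
    steps = 0
    while x1 > 0:
        x1, x2 = x1 + x2, x2 - 1
        steps += 1
    return steps

def norm1(x1, x2):
    return abs(x1) + abs(x2)

# Runtime bounded by a polynomial in the 1-norm, checked on a small grid:
assert all(twn_loop(a, b) <= (norm1(a, b) + 2) ** 2
           for a in range(-5, 20) for b in range(-5, 20))
```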

Polynomial Termination over N is Undecidable

ABSTRACT. In this paper we prove that the problem whether the termination of a given rewrite system can be shown by a polynomial interpretation in the natural numbers is undecidable.

Modular Termination Analysis of C Programs

ABSTRACT. Termination analysis of C programs is challenging. On the one hand, the analysis needs to be precise. On the other hand, programs in practice are usually large and require substantial abstraction. In this extended abstract, we sketch an approach for modular symbolic execution to analyze termination of C programs with several functions. This approach is also suitable to handle recursive programs. We implemented it in our automated termination prover AProVE and evaluated its power on recursive and large programs.

12:30-14:00 Session 39D: WST Workshop: Runtime Complexity (WST 2021)
Location: ZoomRoom 2
Analyzing Expected Runtimes of Probabilistic Integer Programs Using Expected Sizes

ABSTRACT. We present a novel modular approach to infer upper bounds on the expected runtimes of probabilistic integer programs automatically. To this end, it computes bounds on the runtimes of program parts and on the sizes of their variables in an alternating way. To evaluate its power, we implemented our approach in a new version of our open-source tool KoAT.
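A minimal example of the kind of program analysed (the program and its bound below are an illustration, not KoAT input or output): a biased random walk whose expected runtime is linear in the input:

```python
import random

def prob_loop(x, rng):
    """Probabilistic integer program:
       while x > 0: x <- x - 1 with probability 3/4, else x <- x + 1.
    The drift per step is -1/2, so the expected runtime is 2*x."""
    steps = 0
    while x > 0:
        x += -1 if rng.random() < 0.75 else 1
        steps += 1
    return steps

# Monte-Carlo check of the expected-runtime bound E[steps] = 2*x for x = 5:
rng = random.Random(0)
avg = sum(prob_loop(5, rng) for _ in range(10_000)) / 10_000
assert abs(avg - 10) < 1
```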

Parallel Complexity of Term Rewriting Systems

ABSTRACT. In this workshop paper, we revisit the notion of parallel-innermost term rewriting. We provide a definition of parallel complexity and propose techniques to derive upper bounds on this complexity via the Dependency Tuple framework by Noschinski et al.

Between Derivational and Runtime Complexity

ABSTRACT. Derivational complexity of term rewriting considers the length of the longest rewrite sequence for arbitrary start terms, whereas runtime complexity restricts start terms to basic terms. Recently, there has been notable progress in automatic inference of upper and lower bounds for runtime complexity. I propose a novel transformation that lets an off-the-shelf tool for inference of upper or lower bounds for runtime complexity determine upper or lower bounds for derivational complexity as well. The approach is applicable to derivational complexity problems for innermost rewriting and for full rewriting. I have implemented the transformation in the tool AProVE and conducted an extensive experimental evaluation. My results indicate that bounds for derivational complexity can now be inferred for rewrite systems that have been out of reach for automated analysis thus far.
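To make the distinction concrete (a sketch using the standard doubling system, not the paper's transformation): for the rules double(0) → 0 and double(s(x)) → s(s(double(x))), innermost derivations from basic terms have linear length, while nested occurrences of double, which derivational but not runtime complexity must account for, blow the derivation length up exponentially in the nesting depth:

```python
def rewrite(t):
    """One innermost rewrite step for the rules
         double(0) -> 0,   double(s(x)) -> s(s(double(x))).
       Terms are nested tuples; returns (term, whether a step was taken)."""
    if t[0] == 'double':
        arg, stepped = rewrite(t[1])
        if stepped:                     # innermost: reduce the argument first
            return ('double', arg), True
        if arg == ('0',):
            return ('0',), True
        if arg[0] == 's':
            return ('s', ('s', ('double', arg[1]))), True
    if t[0] == 's':
        arg, stepped = rewrite(t[1])
        return ('s', arg), stepped
    return t, False

def steps(t):
    """Length of the innermost derivation from t to normal form."""
    n = 0
    while True:
        t, stepped = rewrite(t)
        if not stepped:
            return n
        n += 1

def num(k):
    """The term s^k(0), argument of a basic start term."""
    t = ('0',)
    for _ in range(k):
        t = ('s', t)
    return t

# Runtime complexity: from the basic term double(s^k(0)), exactly k+1 steps.
assert steps(('double', num(7))) == 8
# Derivational complexity: m nested doubles on s^k(0) cost (2^m - 1)*k + m steps.
assert steps(('double', ('double', ('double', num(2))))) == 17
```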

13:30-14:00 Session 40: Foundations: MetaCoq (LFMTP 2021)
Location: ZoomRoom 3
Translating Formalizations of Type Theories from Intrinsic to Extrinsic Style

ABSTRACT. Type theories can be formalized using the intrinsically (hard) or the extrinsically (soft) typed style. In large libraries of type theoretical features, often both styles are present, which can lead to code duplication and integration issues.

We define an operator that systematically translates a hard-typed into the corresponding soft-typed formulation. Even though this translation is known in principle, a number of subtleties make it more difficult than naively expected. Importantly, our translation preserves modularity, i.e., it maps structured sets of hard-typed features to correspondingly structured soft-typed ones.

We implement our operator in the MMT system and apply it to a library of type-theoretical features.

14:30-16:00 Session 41C: LFMTP Workshop: Formalization (LFMTP 2021)
Location: ZoomRoom 3
Adelfa: A System for Reasoning about LF Specifications
PRESENTER: Mary Southern

ABSTRACT. We present a system called Adelfa that provides mechanized support for reasoning about specifications developed in the Edinburgh Logical Framework or LF. Underlying Adelfa is a new logic in which typing judgements in LF serve as atomic formulas and quantification is permitted over contexts and terms that appear in such formulas. Contexts, which constitute type assignments to uniquely named variables that are modelled using the technical device of nominal constants, are characterized in the logic by context schemas that describe their inductive structure. We present these formulas and an associated semantics before sketching a proof system for constructing arguments that are sound with respect to the semantics. We then outline the realization of this proof system in Adelfa and illustrate its use through a few example proof developments. We conclude the paper by relating Adelfa to existing systems for reasoning about LF specifications.

A logical framework with a graph meta-language
PRESENTER: Bruno Cuconato

ABSTRACT. We conjecture that the relative unpopularity of logical frameworks among practitioners is partly due to their complex meta-languages, which often demand both programming skills and theoretical knowledge of the meta-language in question for them to be fruitfully used. We present ongoing work on a logical framework with a meta-language based on graphs. A simpler meta-language leads to a shallower embedding of the object language, but hopefully leads to easier implementation and usage. A graph-based logical framework also opens up interesting possibilities in time and space performance, by the use of heavily optimized graph databases as backends, and by proof compression algorithms geared towards graphs. Deductive systems can be specified using simple domain specific languages that build on top of the graph database's query language, and there is support for both interactive (through a web-based interface) and semiautomatic (following user-defined tactics specified by a domain-specific language) proof modes. We have so far implemented nine systems for propositional logics, including Fitch-style and backwards-directed Natural Deduction systems for intuitionistic and classical logic (with the classical systems reusing the rules of the intuitionistic ones), and a Hilbert-style system for the K modal logic.

Towards a Semantic Framework for Policy Exchange in the Internet

ABSTRACT. The major issue addressed in *autonomous* data source management, notably within the framework of data exchange, is the variety of heterogeneous data schemas --- the *forms* the data must take. But semantics that give meaning to the factual data can be autonomous as well. In certain contexts such as Internet routing, which lacks a central authority, both the data item --- distributed routing tables that collectively constitute a global path --- and the semantic data --- policies that determine the selection of routing path --- are independently managed by the networks (autonomous systems (ASes)) participating in the Internet. A long-standing networking problem is inflexible policy routing due to the lack of policy sharing mechanisms. This paper seeks to extend the classic data exchange framework to semantic data as a means to manage distributed, autonomous routing policies in the Internet. Specifically, we make the case for knowledge exchange, and use policy exchange as a driving problem --- a special case --- to understand the semantic framework of knowledge exchange: We identify several unique challenges, and illustrate how answer set programming --- originally developed for knowledge representation with negation and non-monotonic reasoning --- and residue-method-based transformation --- originally developed for semantic query optimization --- may serve as a point solution to the specific policy exchange problem.

14:30-16:30 Session 41D: WST Workshop: Term Rewriting and Termination Arguments (WST 2021)
Location: ZoomRoom 2
Did Turing Care of the Halting Problem?

ABSTRACT. The formulation and undecidability proof of the *halting problem* is usually attributed to Turing's 1936 landmark paper. In 2004, though, Copeland noticed that it was so named and, apparently, first stated in a 1958 book by Martin Davis. Indeed, in his paper Turing paid no attention to halting machines. Words (or prefixes) like "halt(ing)", "stop" or "terminat(e,ing)" do not occur in the text. Turing partitions his machines into two classes of *non-halting machines*, one of them able to produce the kind of real numbers he was interested in. His notion of computation did not require termination. His decidability results concerned the classification of his machines (the *satisfactoriness* problem) and their 'productivity' (the *printing* problem). No attempt to formulate or prove the halting problem is made. Other researchers were concerned with these issues, though. We briefly discuss their role in formulating what we currently understand as the *halting problem*.

Formalizing Higher-Order Termination in Coq

ABSTRACT. We describe a formalization of higher-order rewriting theory and formally prove that an AFS is strongly normalizing if it can be interpreted in a well-founded domain. To do so, we use Coq, which is a proof assistant based on dependent type theory. Using this formalization, one can implement several termination techniques, like the interpretation method or dependency pairs, and prove their correctness. Those implementations can then be extracted to OCaml, which results in a verified termination checker.

Observing Loopingness

ABSTRACT. In this paper, we consider non-termination in logic programming and in term rewriting, and we recall some well-known results for observing it. Then, we instantiate these results to loopingness, a simple form of non-termination. We provide a number of examples that seem to indicate that the instantiations are correct, as well as partial proofs.

Loops for which Multiphase-Linear Ranking Functions are Sufficient

ABSTRACT. In this paper, we are interested in termination analysis of Single-path Linear-Constraint loops (SLC loops) using Multiphase-Linear Ranking Functions (MΦRFs for short), in particular in identifying sub-classes of SLC loops for which MΦRFs are sufficient, i.e., they terminate iff they have MΦRFs. We prove this result for two kinds of loops: octagonal relations and affine relations with the finite-monoid property.
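As a standard illustration (the example loop is illustrative, not one of the paper's sub-classes): for the SLC loop while x ≥ 0: (x, y) ← (x + y, y − 1), no single linear function decreases on every step, but the pair ⟨y, x⟩ is a multiphase-linear ranking function: y decreases while it is nonnegative (phase 1), and once y < 0, x strictly decreases (phase 2), forcing termination:

```python
def run_with_mphirf(x, y):
    """Execute the SLC loop  'while x >= 0: (x, y) <- (x + y, y - 1)',
    checking the two-phase ranking argument <f1, f2> = <y, x> along the run:
    phase 1 (y >= 0): f1 = y strictly decreases;
    phase 2 (y < 0):  f2 = x decreases by at least 1 each step."""
    steps = 0
    while x >= 0:
        nx, ny = x + y, y - 1
        if y >= 0:
            assert ny < y          # phase 1 progress
        else:
            assert nx <= x - 1     # phase 2 progress, so the loop must exit
        x, y = nx, ny
        steps += 1
    return steps

# The run terminates and every step satisfies its phase's decrease condition.
assert run_with_mphirf(10, 3) == 10
```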