LPAR 2024C: Papers with Abstracts

Abstract. Continuous double auctions are commonly used to match orders at currency, stock, and commodities exchanges. A verified implementation of continuous double auctions is a useful tool for market regulators, as it gives rise to automated checkers that are guaranteed to detect errors in the trade logs of an existing exchange if they contain trades that violate the matching rules. We provide an efficient and formally verified implementation of continuous double auctions that takes O(n log n) time to match n orders. This improves on an earlier O(n²) verified implementation. We also prove a matching Ω(n log n) lower bound on the running time for continuous double auctions. Our new implementation takes only a couple of minutes to run on ten million randomly generated orders, as opposed to a few days taken by the earlier implementation, and it gives rise to an efficient automatic checker.
We use the Coq proof assistant for verifying our implementation and extracting a verified OCaml program. While using Coq’s standard library implementation of red-black trees to obtain our improvement, we observed that its specification has serious gaps, which we fill in this work; this might be of independent interest.
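The O(n log n) bound comes from handling each order with a constant number of priority-queue operations. A minimal, unverified Python sketch of price-time priority matching (purely illustrative; the paper's actual artifact is a verified Coq development extracted to OCaml) might look as follows:

```python
import heapq

def match_orders(orders):
    """Toy continuous double auction: each incoming order trades against the
    best-priced opposite resting order while prices cross, otherwise it rests.
    orders: list of (side, price, qty); returns executed trades as
    (resting_id, incoming_id, price, qty). Illustrative sketch only."""
    bids, asks = [], []               # max-heap via negated prices; min-heap
    trades = []
    for i, (side, price, qty) in enumerate(orders):
        if side == 'buy':
            while qty and asks and asks[0][0] <= price:
                ap, aid, aq = heapq.heappop(asks)
                traded = min(qty, aq)
                trades.append((aid, i, ap, traded))
                qty -= traded
                if aq > traded:       # partially filled ask rests again
                    heapq.heappush(asks, (ap, aid, aq - traded))
            if qty:
                heapq.heappush(bids, (-price, i, qty))
        else:
            while qty and bids and -bids[0][0] >= price:
                bp, bid_id, bq = heapq.heappop(bids)
                traded = min(qty, bq)
                trades.append((bid_id, i, -bp, traded))
                qty -= traded
                if bq > traded:
                    heapq.heappush(bids, (bp, bid_id, bq - traded))
            if qty:
                heapq.heappush(asks, (price, i, qty))
    return trades
```

Each of the n orders triggers at most a constant number of amortized heap pushes and pops, each O(log n), giving the O(n log n) total that the abstract claims for the verified implementation.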
Abstract. Choices in the semantics and the signature of a theory are integral in determining how the theory is used and how challenging it is to reason over it. Our interest in this paper lies in the SMT theory of sequences. Various versions of it exist in the literature and in state-of-the-art SMT solvers, but it has not yet been standardized in SMT-LIB. We reflect on its existing variants, and we define a set of theory design criteria to help determine what makes one variant of a theory better than another. The criteria we define can be used to appraise theory proposals for other theories as well. Based on these criteria, we propose a series of changes to the SMT theory of sequences as a contribution to the discussion regarding its standardization.
Abstract. The various possible semantics of Dijkstra's guarded commands are elucidated by means of a new graph-based, low-level, generic specification language designed for asynchronous nondeterministic algorithms. The graphs incorporate vertices for actions and edges indicating control flow alongside edges for data flow. This formalism allows one to specify not just input-output behavior, nor only what actions take place, but also precisely how often and in what order they transpire.
Abstract. Algorithm certification and program verification are of increasing importance in the current technological landscape, due to the sharp increase in the complexity of software and software-based systems and the high potential for adverse effects in case of failure. For instance, robots constitute a particular class of systems that can present high risks of such failures. Sorting, on the other hand, has a growing range of applications, in particular those where organizing huge data collections is critical, as for instance in environmental applications.
We present an experiment in formal certification of an original version of the BubbleSort algorithm that is functional and tail recursive. The certification is performed in parallel in both Theorema and Coq, which allows us to compare the characteristics and the performance of the two systems. In Theorema the proofs are produced automatically in natural style (similar to human proofs), while in Coq they are based on scripts. However, the background theory, the algorithms, and the proof rules in Theorema are composed by the user without any restrictions, and are thus error prone, while in Coq one can only use theories and proof rules that are rigorously checked by the system, and the algorithms are checked for termination.
The goal of our experiments is to contribute to a better understanding and estimation of the complexity of such certification tasks and to create a basis for further increase of the level of automation in the two systems and for their possible integration.
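A functional, tail-recursive formulation of BubbleSort can be sketched as a pass function plus a recursive driver. The following Python analogue is illustrative only (the paper's certified version lives in Theorema and Coq, and Python does not optimize tail calls):

```python
def bubble_pass(xs, acc=(), swapped=False):
    """One tail-recursive pass: accumulate the smaller of each adjacent pair,
    carrying the larger element forward. Returns (new_list, swapped_flag)."""
    acc = list(acc)
    if len(xs) < 2:
        return acc + list(xs), swapped
    a, b = xs[0], xs[1]
    if a <= b:
        return bubble_pass(xs[1:], acc + [a], swapped)
    # swap: keep b, carry a forward, remember that a swap happened
    return bubble_pass([a] + list(xs[2:]), acc + [b], True)

def bubble_sort(xs):
    """Repeat passes until a pass performs no swap (tail-recursive driver)."""
    xs, swapped = bubble_pass(xs)
    return bubble_sort(xs) if swapped else xs
```

Termination is intuitive here because each swapping pass strictly reduces the number of inversions; establishing this formally is exactly the kind of obligation the two proof systems discharge differently.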
Abstract. We demonstrate the utility of generic automated reasoning (AR) methods in the computational topology domain, evidencing the benefit of using existing AR machinery within the domain on the one hand, whilst providing a pathway into a rich playground with the potential to drive future AR requirements on the other. We also make progress towards a contribution to quantum software verification, via a recent proposal to use tangles as a representation of a certain class of quantum programs. The general methodology is, roughly speaking, to transform tasks of equivalence of topological objects (tangles) into equivalence of algebraic objects (pointed quandles), which in turn translate into AR tasks. To enhance trust in automated checks, this can be followed by interpretation of AR outputs as human-readable output, either in symbolic algebraic form suitable for domain experts or, ideally, in visual topological form, potentially suitable for all. We provide a formalisation via an appropriate class of labelled tangles (suitable for quantum verification) with associated algebraic invariants (pointed involutory quandles) and translate equivalence checking of these invariants into equational reasoning tasks. Furthermore, subsequent to automated proof creation for simple quantum verification (QV) examples, we demonstrate manual extraction of visual proof rules and visual equivalence, utilising proof graphs as a bridging step, progressing towards the automation of human-readable visual proofs.
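To make the algebraic side concrete: an involutory quandle is a set with a binary operation that is idempotent, involutory ((x ▷ y) ▷ y = x), and right self-distributive. A brute-force Python check over a finite operation table (an illustrative aside, not the paper's machinery) verifies these axioms; the dihedral (Takasaki) quandle x ▷ y = 2y − x mod n is a standard example:

```python
def is_involutory_quandle(n, op):
    """Brute-force check of the involutory-quandle axioms on {0, ..., n-1}:
    idempotence, involutivity, and right self-distributivity."""
    xs = range(n)
    idem = all(op(x, x) == x for x in xs)
    invol = all(op(op(x, y), y) == x for x in xs for y in xs)
    dist = all(op(op(x, y), z) == op(op(x, z), op(y, z))
               for x in xs for y in xs for z in xs)
    return idem and invol and dist

# Dihedral (Takasaki) quandle on Z_n: x ▷ y = 2y - x (mod n)
def dihedral(n):
    return lambda x, y: (2 * y - x) % n
```

Invariants like these are what get compared when tangle equivalence is reduced to equational reasoning over (pointed) quandles.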
Abstract. We give an evaluation system for the admissibility of rules in intuitionistic propositional logic (IPL). The system is based on recent work on the proof-theoretic semantics of IPL.
Abstract. We introduce and discuss a term-rewriting technique to convert numbers from any numeric base to any other one without explicitly appealing to division. The rewrite system is purely local, and conversion has quadratic complexity, matching that of the standard base-conversion algorithm. We prove confluence and termination, and give an implementation and benchmarks in the Dedukti type checker. This work extends and generalizes previous work by Delahaye, published in a popular-science review.
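As an illustration of division-free conversion (a plain Python sketch, not the Dedukti rewrite system itself), one can apply Horner's rule over the source digits and resolve every carry by repeated local decrements instead of division; for fixed bases each carry step costs a bounded amount of work, giving the same quadratic behaviour in the number of digits:

```python
def convert_base(src_digits, src_base, dst_base):
    """Convert a most-significant-first digit list from src_base to dst_base
    without division: Horner's rule, with each carry extracted by repeated
    subtraction (a stand-in for the rewrite system's purely local rules)."""
    out = [0]                          # little-endian digits in dst_base
    for d in src_digits:               # out := out * src_base + d
        carry = d
        for i in range(len(out)):
            total = out[i] * src_base + carry
            carry = 0
            while total >= dst_base:   # local rewrite instead of divmod
                total -= dst_base
                carry += 1
            out[i] = total
        while carry:                   # append remaining high digits
            total, carry = carry, 0
            while total >= dst_base:
                total -= dst_base
                carry += 1
            out.append(total)
    while len(out) > 1 and out[-1] == 0:   # strip leading zeros
        out.pop()
    return out[::-1]                   # back to most-significant-first
```

For example, the binary digits 1011 (eleven) convert to the decimal digits 1, 1 without any division being performed.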
Abstract. For tutorial purposes we realized a propositional prover in the Theorema system that works in natural style and is based on sequent calculus with unit propagation.
The version of the sequent calculus that we use is a reductionist one: at each step a formula is decomposed according to a fixed rule attached to its main logical connective. Optionally the prover may use unit propagation. Unit propagation in sequent calculus is a novel inference method introduced by the author that consists in propagating literals that occur either as antecedents or postcedents in the sequent: all occurrences of the corresponding variable in any of the other formulae are replaced by the appropriate truth value and the respective formula is simplified by rewriting.
By natural style we understand a style similar to human activity. This applies to the syntax of formulae, to inference steps, and to proof presentation. Although based on sequent calculus, the prover does not produce proofs as sequent proof trees, but as natural-style narratives.
The purpose of the tool is tutorial: to aid the understanding of natural-style proving, of sequent calculus, and of the implementation in the Theorema system as a set of rewrite rules for the inferences together with a set of accompanying patterns for the explanatory text produced by the prover.
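The unit-propagation step described above can be sketched in Python for plain and/or/not formulas (an illustrative approximation, not the Theorema rewrite rules): a variable standing alone on the left of the sequent is replaced by True in all other formulae, one standing alone on the right by False, and the results are simplified by rewriting:

```python
def simplify(f):
    """Boolean simplification of formulas built from strings (variables),
    booleans, and tuples ('not', a), ('and', a, b), ('or', a, b)."""
    if isinstance(f, (str, bool)):
        return f
    if f[0] == 'not':
        a = simplify(f[1])
        return (not a) if isinstance(a, bool) else ('not', a)
    a, b = simplify(f[1]), simplify(f[2])
    if f[0] == 'and':
        if a is False or b is False: return False
        if a is True: return b
        if b is True: return a
        return ('and', a, b)
    # 'or'
    if a is True or b is True: return True
    if a is False: return b
    if b is False: return a
    return ('or', a, b)

def subst(f, var, val):
    """Replace every occurrence of variable var in f by the truth value val."""
    if f == var:
        return val
    if isinstance(f, (str, bool)):
        return f
    return (f[0],) + tuple(subst(g, var, val) for g in f[1:])

def unit_propagate(antecedents, succedents):
    """One propagation round: left-hand unit variables become True and
    right-hand unit variables become False in the remaining formulae."""
    units = {f: True for f in antecedents if isinstance(f, str)}
    units.update({f: False for f in succedents if isinstance(f, str)})
    def reduce_(f):
        if isinstance(f, str):         # keep the unit literals themselves
            return f
        for v, val in units.items():
            f = subst(f, v, val)
        return simplify(f)
    return [reduce_(f) for f in antecedents], [reduce_(f) for f in succedents]
```

For instance, in the sequent p, (p ∨ q) ⊢ (p ∧ q), the unit p rewrites the left disjunction to True and the right conjunction to q.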
Abstract. Inductive proofs can be represented as proof schemata, i.e. as parameterized sequences of proofs defined in a primitive recursive way. Applications of proof schemata can be found in the area of automated proof analysis, where the schemata admit (schematic) cut-elimination and the construction of Herbrand systems. This work focuses on the expressivity of proof schemata as defined in [10]. We show that proof schemata can simulate primitive recursive arithmetic as defined in [12]. Future research will focus on extending the simulation to primitive recursive arithmetic with quantification as defined in [7]. The translation of proofs in arithmetic to proof schemata can be considered a crucial step in the analysis of inductive proofs.
Abstract. We present recent developments in the application of automated theorem proving to the investigation of the Andrews–Curtis conjecture. We demonstrate previously unknown simplifications of group presentations from a parametric family MSn(w∗) of trivial group presentations for n = 3, 4, 5, 6, 7, 8 (a subset of the well-known Miller–Schupp family). Based on human analysis of these simplifications, we formulate two conjectures on the structure of simplifications for the infinite family MSn(w∗), n ≥ 3.
This is an extended and updated version of the abstract [11] presented at the AITP 2023 conference.
Abstract. LPTP (Logic Program Theorem Prover) is an interactive natural-deduction-based theorem prover for pure Prolog programs with negation as failure, unification with the occurs check, and a restricted but extensible set of built-in predicates. With LPTP, one can formally prove termination and partial correctness of such Prolog programs. LPTP was designed in the mid-1990s by Robert F. Stärk. It is written in ISO-Prolog and comes with an Emacs user interface.
From a theoretical point of view, in his publications about LPTP, Stärk associates a set of first-order axioms IND(P) with the considered Prolog program P. IND(P) contains Clark's equality theory for P, definitions of success, failure, and termination for each user-defined logic procedure in P, axioms relating these three points of view, and an axiom schema for proving inductive properties. LPTP is thus a dedicated proof editor in which these axioms are hard-wired.
We propose to make these axioms explicit as first-order formulas (FOFs) and to apply automated theorem provers to check the property of interest. Using FOF as an intermediary language, we experiment with the use of automated theorem provers for Prolog program verification. We evaluate the approach over a benchmark of about 400 properties of Prolog programs from the library available with LPTP. Both the compiler, which generates a set of FOF files from a given input Prolog program together with its properties, and the benchmark are publicly available.
Abstract. Symbolic derivatives of extended regular expressions, or regexes, provide a mechanism for incremental unfolding of regexes into symbolic automata. The underlying representation is in the form of so-called transition regexes, which are particularly useful in the context of SMT for supporting lazy propagation of Boolean operations such as complement. Here we introduce symbolic derivatives of further operations on regexes, including fusion and lookaheads, which are currently not supported in the context of SMT but have been used in many application domains ranging from matching to temporal logic specifications. We argue that these operators could also be supported in the context of SMT and motivate why such direct support would be beneficial.
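Transition regexes build on the classical derivative construction. A minimal Python sketch of plain Brzozowski derivatives (without the symbolic alphabet and transition-regex machinery the abstract describes) shows the incremental-unfolding idea: each input character rewrites the regex into the regex of the remaining suffixes:

```python
def nullable(r):
    """Does regex r accept the empty string? Regexes are tuples:
    ('empty',), ('eps',), ('chr', c), ('alt', r, s), ('cat', r, s), ('star', r)."""
    op = r[0]
    if op in ('eps', 'star'):
        return True
    if op in ('empty', 'chr'):
        return False
    if op == 'cat':
        return nullable(r[1]) and nullable(r[2])
    return nullable(r[1]) or nullable(r[2])    # 'alt'

def deriv(r, c):
    """Brzozowski derivative: the regex for suffixes of L(r) after reading c."""
    op = r[0]
    if op in ('eps', 'empty'):
        return ('empty',)
    if op == 'chr':
        return ('eps',) if r[1] == c else ('empty',)
    if op == 'alt':
        return ('alt', deriv(r[1], c), deriv(r[2], c))
    if op == 'cat':
        d = ('cat', deriv(r[1], c), r[2])
        return ('alt', d, deriv(r[2], c)) if nullable(r[1]) else d
    return ('cat', deriv(r[1], c), r)          # 'star' unfolds one step

def matches(r, word):
    """Match by repeated derivation: states of the unfolded automaton
    are regexes, and acceptance is nullability of the final state."""
    for c in word:
        r = deriv(r, c)
    return nullable(r)
```

Extending this scheme with complement, fusion, and lookaheads is where transition regexes, as sketched in the abstract, come into play.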
Abstract. We introduce a notion of extensional metamodeling that can be used to extend knowledge representation languages, and show that this feature does not increase the computational complexity of reasoning in many cases. We sketch the relation of our notion to various existing logics with metamodeling and to non-wellfounded sets, and discuss applications. We also comment on the usability of black-box reductions for developing reasoning algorithms for metamodeling.