LPAR 2024: Papers with Abstracts

Papers
Abstract. We present a method and a tool, hol2dk, to fully automatically translate proofs from the proof assistant HOL-Light to the proof assistant Coq, using Dedukti as an intermediate language. Moreover, a number of types, functions and predicates defined in HOL-Light are proved (by hand) to be equal to their counterparts in the Coq standard library. By replacing those types and functions by their Coq counterparts everywhere, we obtain a library of theorems (based on classical logic, like HOL-Light) that can be directly used and applied in other Coq developments.
Abstract. This is part of an ongoing research project aimed at finding the connections between properties related to theory combination in Satisfiability Modulo Theories. In previous work, seven properties were analyzed: convexity, stable infiniteness, smoothness, finite witnessability, strong finite witnessability, the finite model property, and stable finiteness. The first two properties are related to Nelson-Oppen combination, the third and fourth to polite combination, the fifth to strong politeness, and the last two to shininess. However, the remaining key property of shiny theories, namely, the ability to compute the cardinalities of minimal models, was not yet analyzed. In this paper we study this property and its connection to the others.
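For context, the two Nelson-Oppen-related properties can be stated as follows (standard textbook definitions, ours rather than quotations from the paper):

```latex
% Convexity: for every conjunction of literals $\Gamma$ and variables
% $x_1,\dots,x_n,y_1,\dots,y_n$,
\[
  \Gamma \models_T \bigvee_{i=1}^{n} x_i = y_i
  \quad\Longrightarrow\quad
  \Gamma \models_T x_j = y_j \ \text{for some}\ j.
\]
% Stable infiniteness: every quantifier-free $T$-satisfiable formula is
% satisfiable in a model of $T$ with an infinite domain.
```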
Abstract. Sorting networks are sorting algorithms that execute a fixed sequence of operations independently of the input. Since they can be implemented directly as circuits, sorting networks are easy to implement in hardware – but they are also often used in software to improve the performance of the base cases of standard recursive sorting algorithms. For this purpose, they are translated into machine-code instructions in a systematic way.
Recently, a deep-learning system discovered implementations of some sorting networks with up to 8 inputs that are better than any previously known. In this article, we show that all these examples are instances of a general pattern whereby some instructions are removed. We show that this removal can be done when a particular set of constraints on integers is satisfiable, and identify conditions under which we can reduce this problem to propositional satisfiability. We systematically apply this general construction to improve the best-known implementations of sorting networks of size up to 128, which are the ones most commonly found in software implementations.
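To illustrate the objects involved (our own sketch, independent of the paper's construction): a sorting network is a fixed sequence of compare-exchange operations, and each comparator compiles systematically to branch-free min/max instructions.

```python
# Illustrative sketch (not the paper's code): the standard 5-comparator
# sorting network for 4 inputs, executed as a fixed, input-independent
# sequence of compare-exchange steps.
NETWORK_4 = [(0, 1), (2, 3), (0, 2), (1, 3), (1, 2)]

def run_network(values, network):
    """Apply each comparator (i, j): min goes to position i, max to j,
    with no data-dependent branches."""
    for i, j in network:
        lo, hi = min(values[i], values[j]), max(values[i], values[j])
        values[i], values[j] = lo, hi
    return values

assert run_network([3, 1, 4, 2], NETWORK_4) == [1, 2, 3, 4]
```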
Abstract. In this paper we demonstrate how logic programming systems and automated first-order logic theorem provers (ATPs) can improve the accuracy of Large Language Models (LLMs) on logical reasoning tasks, where the baseline performance is given by direct LLM solutions. We first evaluate LLM reasoning on steamroller problems using the PRONTOQA benchmark. We show how accuracy can be improved with a neuro-symbolic architecture in which the LLM acts solely as a front-end that translates a given problem into a formal logic language, and an automated reasoning engine is called to solve it. However, this approach critically hinges on the correctness of the LLM translation. To assess this translation correctness, we secondly define a framework of syntactic and semantic error categories. We implemented the framework and used it to identify errors that LLMs make in the benchmark domain. Based on these findings, we thirdly extend our method with capabilities for automatically correcting syntactic and semantic errors. For semantic error correction we integrate first-order logic ATPs, which is our main and novel contribution. We demonstrate that this approach significantly reduces semantic errors and further increases the accuracy of LLM logical reasoning.
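A minimal, self-contained sketch of the division of labour in such an architecture; everything here is a toy placeholder (a stub translator and a tiny closure-based "reasoner"), not the authors' system:

```python
# Toy illustration of the neuro-symbolic pipeline: an LLM-like front-end
# only translates text into formal rules, and a symbolic back-end reasons.

def llm_translate(problem: str) -> list[tuple[str, str]]:
    """Stand-in for the LLM front-end: returns subclass facts in a formal syntax."""
    # e.g. "every cat is an animal; every animal is living"
    return [("cat", "animal"), ("animal", "living")]

def symbolic_reasoner(rules, query):
    """Stand-in for the ATP/logic-programming back-end: transitive closure."""
    derived, changed = set(rules), True
    while changed:
        changed = False
        for (a, b) in list(derived):
            for (c, d) in list(derived):
                if b == c and (a, d) not in derived:
                    derived.add((a, d))
                    changed = True
    return query in derived

rules = llm_translate("every cat is an animal; every animal is living")
print(symbolic_reasoner(rules, ("cat", "living")))  # True
```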
Abstract. Group polarization, the phenomenon where individuals become more extreme after interacting, has been gaining attention, especially with the rise of social media shaping people's opinions. Recent interest has emerged in formal reasoning about group polarization using logical systems. In this work we consider the modal logic PNL, which captures the notion of agents agreeing or disagreeing on a given topic. Our contribution involves enhancing PNL with advanced formal reasoning techniques, instead of relying on axiomatic systems, for analyzing group polarization. To achieve this, we introduce a semantic game tailored for (hybrid) extensions of PNL. This game fosters dynamic reasoning about concrete network models, aligning with our goal of strengthening PNL's effectiveness in studying group polarization. We show how this semantic game leads to a provability game by systematically exploring the truth in all models. This leads to the first cut-free sequent systems for some variants of PNL. Using polarization of formulas, the proposed calculi can be modularly adapted to consider different frame properties of the underlying model.
Abstract. We present a first-order theorem proving framework for establishing the correctness of functional programs implementing sorting algorithms with recursive data structures. We formalize the semantics of recursive programs in many-sorted first-order logic and integrate sortedness and permutation properties within our first-order formalization. Rather than focusing on sorting lists of elements of specific first-order theories, such as integer arithmetic, our list formalization relies on a sort parameter abstracting (arithmetic) theories and hence concrete sorts. We formalize the permutation property of lists in first-order logic so that we can automatically prove verification conditions of such algorithms purely by superposition-based first-order reasoning. Doing so, we adjust recent efforts for automating induction in saturation. We advocate a compositional approach for automating the proofs by induction required to verify functional programs implementing and preserving sorting and permutation properties over parameterized list structures. Our work turns saturation-based first-order theorem proving into an automated verification engine by (i) guiding automated inductive reasoning with manual proof splits and (ii) fully automating inductive reasoning in saturation. We showcase the applicability of our framework on recursive sorting algorithms, including Mergesort and Quicksort.
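One standard first-order rendering of sortedness and the permutation property over parameterized lists (a textbook formulation; the paper's exact axioms may differ):

```latex
\begin{align*}
  & \mathit{sorted}(\mathit{nil}), \qquad \mathit{sorted}(\mathit{cons}(x,\mathit{nil})),\\
  & \mathit{sorted}(\mathit{cons}(x,\mathit{cons}(y,\ell))) \leftrightarrow
      x \leq y \wedge \mathit{sorted}(\mathit{cons}(y,\ell)),\\
  & \mathit{cnt}(x,\mathit{nil}) = 0, \qquad
    \mathit{cnt}(x,\mathit{cons}(y,\ell)) = \mathit{cnt}(x,\ell) +
      (\mathrm{if}\ x = y\ \mathrm{then}\ 1\ \mathrm{else}\ 0),\\
  & \mathit{perm}(\ell_1,\ell_2) \leftrightarrow
      \forall x.\ \mathit{cnt}(x,\ell_1) = \mathit{cnt}(x,\ell_2).
\end{align*}
```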
Abstract. Most high-end automated theorem provers for first-order logic (FOL) split the available time between short runs of a large portfolio of search strategies. These runs are typically independent and can be parallelized to exploit all available processor cores. We explore several avenues for re-using clauses generated by earlier runs and present experimental results on their usefulness, or lack thereof.
Abstract. We present our tool FuncTion-V for the automatic identification of the minimal sets of program variables that an attacker can control to ensure an undesirable program property. FuncTion-V supports program properties expressed in Computation Tree Logic (CTL), and builds upon an abstract interpretation-based static analysis for CTL properties that we extend with an abstraction refinement process. We showcase our tool on benchmarks collected from the literature and SV-COMP 2023.
Abstract. We introduce a software tool for reasoning about belief change in situations where information is received from reports and observations. Our focus is on the interaction between trust and belief. The software is based on a formal model where the level of trust in a reporting agent increases when they provide accurate reports and decreases when they provide inaccurate reports. If trust in an agent drops below a given threshold, then their reports no longer impact our beliefs at all. The notion of accuracy is determined by comparing reports to observations, as well as to reports from more trustworthy agents. The emphasis of this paper is not on the formalism, but on the development of a prototype system for automatically calculating the result of iterated revision problems involving trust. We present an implemented system that allows users to flexibly specify and solve complex revision problems involving reports from partially trusted sources.
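A minimal sketch of the trust dynamics described above; the numeric update rule and constants are illustrative assumptions of ours, not the paper's formal model:

```python
# Trust rises on accurate reports, falls on inaccurate ones, and reports
# from agents below a threshold stop affecting beliefs entirely.
THRESHOLD = 0.3
STEP = 0.1

def update_trust(trust: float, report_accurate: bool) -> float:
    delta = STEP if report_accurate else -STEP
    return min(1.0, max(0.0, trust + delta))

def report_impacts_beliefs(trust: float) -> bool:
    return trust >= THRESHOLD

trust = 0.4
trust = update_trust(trust, report_accurate=False)  # ~0.3
trust = update_trust(trust, report_accurate=False)  # ~0.2
print(report_impacts_beliefs(trust))                # False: reports now ignored
```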
Abstract. Simulation is an important aspect of model checking, serving as an invaluable preprocessing step that can quickly generate a set of reachable states. This is evident in the model checking tools at the Hardware Model Checking Competitions, where Btor2 is used to represent verification problems. Recently, Btor2MLIR was introduced as a novel format for representing safety and correctness constraints for hardware circuits. It provides an executable semantics for circuits represented in Btor2 by producing an equivalent program in LLVM-IR. One challenge in simulating Btor2 circuits is the use of persistent (i.e., immutable) arrays to represent memory. Persistent arrays work well for symbolic reasoning in SMT, but they require copy-on-write semantics when simulated natively. We provide an algorithm for converting persistent arrays to transient (i.e., mutable) arrays with efficient native execution. This approach is implemented in Btor2MLIR, which opens the door to rapid prototyping, dynamic verification techniques, and random testing using established toolchains such as LibFuzzer and KLEE. Our evaluation shows that our approach, compared with BtorSim, achieves a speedup of three orders of magnitude when safety properties are trivial, and at least one order of magnitude when constraints are disabled.
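An illustrative sketch (not Btor2MLIR's implementation) of the two array semantics discussed above: a persistent write copies the whole array, which is what makes naive native simulation expensive, while a transient write may mutate in place when no one still observes the old version.

```python
def persistent_write(arr: list, idx: int, val):
    """Copy-on-write: returns a fresh array, leaving `arr` untouched. O(n)."""
    new_arr = list(arr)
    new_arr[idx] = val
    return new_arr

def transient_write(arr: list, idx: int, val):
    """In-place update: safe only if the old version is never read again. O(1)."""
    arr[idx] = val
    return arr

mem0 = [0, 0, 0, 0]
mem1 = persistent_write(mem0, 1, 7)   # mem0 is still [0, 0, 0, 0]
mem1 = transient_write(mem1, 2, 9)    # mem1 mutated in place, no copy
print(mem0, mem1)                     # [0, 0, 0, 0] [0, 7, 9, 0]
```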
Abstract. We introduce Virtual Integer-Real Arithmetic Substitution (Viras), a quantifier elimination procedure for deciding quantified linear mixed integer-real arithmetic problems. Viras combines the framework of virtual substitutions with conflict-driven proof search and linear integer arithmetic reasoning based on Cooper's method. We demonstrate that Viras gives an exponential speedup over state-of-the-art methods in quantified arithmetic reasoning, proving problems that SMT-based techniques fail to solve.
Abstract. Hyperedge-replacement (HR) grammars were introduced by Courcelle in order to extend the notion of context-free sets from words and trees to graphs of bounded tree-width. While for words and trees the syntactic restrictions that guarantee that the associated languages of words resp. trees are regular - and hence, MSO-definable - are known, the situation is far more complicated for graphs. Here, Courcelle proposed the notion of regular graph grammars, a syntactic restriction of HR grammars that guarantees the definability of the associated languages of graphs in Counting Monadic Second-Order Logic (CMSO). However, these grammars are not complete in the sense that not every CMSO-definable set of graphs of bounded tree-width can be generated by a regular graph grammar. In this paper, we introduce a new syntactic restriction of HR grammars, called tree-verifiable graph grammars, and a new notion of bounded tree-width, called embeddable bounded tree-width, where the latter restricts the trees of a tree-decomposition to be subgraphs of the analyzed graph. The main property of tree-verifiable graph grammars is that their associated languages are CMSO-definable and have bounded embeddable tree-width. We show further that they strictly generalize the regular graph grammars of Courcelle. Finally, we establish a completeness result, showing that every language of graphs that is CMSO-definable and of bounded embeddable tree-width can be generated by a tree-verifiable graph grammar.
Abstract. Intersection type systems have been independently applied to different evaluation strategies, such as call-by-name (CBN) and call-by-value (CBV). These type systems have then been generalized to different subsuming paradigms that are able, in particular, to encode CBN and CBV in a unique unifying framework. However, there are no intersection type systems that explicitly enable CBN and CBV to cohabit together, without making use of an encoding into a common target framework.
This work proposes an intersection type system for a specific notion of evaluation for PCF, called PCFH. Evaluation in PCFH has a hybrid nature, in the sense that CBN and CBV operational behaviors cohabit together. Indeed, PCFH combines a CBV-like behavior for function application with a CBN-like behavior for recursion. This hybrid nature is reflected in the type system, which turns out to be sound and complete with respect to PCFH: not only does typability imply normalization, but the converse also holds. Moreover, the type system is quantitative, in the sense that the size of typing derivations provides upper bounds on the length of reduction sequences to normal form. This first type system is then refined to a tight one, offering exact information regarding the length of normalization sequences. This is the first time that a sound and complete quantitative type system has been designed for a hybrid computational model.
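To make the CBN/CBV contrast concrete, here is the classic textbook example (not from the paper), where Ω is the standard diverging term:

```latex
% With $\Omega = (\lambda x.\,x\,x)\,(\lambda x.\,x\,x)$:
\[
  (\lambda x.\,y)\,\Omega \;\longrightarrow_{\beta}\; y \quad\text{under CBN,}
  \qquad\text{while evaluation diverges under CBV,}
\]
% since CBV must fully evaluate the argument $\Omega$ before firing the
% $\beta$-step. In PCFH, application is CBV-like while recursion unfolds
% CBN-like, so both behaviours arise in a single program.
```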
Abstract. We propose a translation from eBPF (extended Berkeley Packet Filter) code to CHC (Constrained Horn Clause sets) over the combined theory of bitvectors and arrays. eBPF is used in particular in the Linux kernel, where user code is executed under kernel privileges. In order to protect the kernel, a well-known verifier statically checks the code for any harm, and a number of research efforts have been undertaken to secure and improve the performance of this verifier. This paper is about verifying the functional properties of the eBPF code itself. Our translation procedure bpfverify is precise and covers almost all details of the eBPF language. Functional properties are automatically verified using Z3. We prove termination of the procedure and show on real-world eBPF code examples that full-fledged automatic verification is actually feasible.
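A minimal, self-contained CHC example in Z3's Python API (pip install z3-solver), illustrating the kind of Horn-clause verification target such a translation produces; this toy clause set is ours, not output of the actual bpfverify translation:

```python
from z3 import (Function, BitVecSort, BoolSort, BitVec,
                ForAll, Implies, And, Not, SolverFor)

Inv = Function("Inv", BitVecSort(32), BoolSort())  # unknown inductive invariant
x = BitVec("x", 32)

s = SolverFor("HORN")
s.add(ForAll(x, Implies(x == 0, Inv(x))))                   # init: x = 0
s.add(ForAll(x, Implies(And(Inv(x), x < 10), Inv(x + 1))))  # step: x := x + 1
s.add(ForAll(x, Implies(Inv(x), Not(x > 10))))              # safety: x <= 10
print(s.check())  # sat: an invariant exists, i.e. the property holds
```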
Abstract. We present the CheckMate tool for automated verification of game-theoretic security properties, with application to blockchain protocols. CheckMate applies automated reasoning techniques to determine whether a game-theoretic protocol model is game-theoretically secure, that is, Byzantine fault tolerant and incentive compatible. We describe CheckMate's input format and its various components, modes, and output. CheckMate is evaluated on 15 benchmarks, including models of decentralized protocols, board games, and game-theoretic examples.
Abstract. In this paper we present the proof that there is no elementary translation from cut-free derivations in the sequent calculus variant of the epsilon calculus to LK-proofs with bounded cut-complexity. This is a partial answer to a question by Toshiyasu Arai. Furthermore, we show that the intuitionistic format of the sequent calculus variant of the epsilon calculus is not sound for intuitionistic logic, due to the presence of all classical quantifier-shift rules.
Abstract. In this paper, we present a general strategy that enables the translation of tableau proofs using different Skolemization rules into machine-checkable proofs. It is part of a framework that enables (i) instantiation of the strategy into algorithms for different sets of tableau rules (e.g., for different logics) and (ii) easy soundness proofs relying on the local extensibility of user-defined rules. Furthermore, we propose an instantiation of this strategy for first-order tableaux that notably handles pre-inner Skolemization rules, which is, as far as the authors know, the first one in the literature. This deskolemization strategy has been implemented in the Goéland [17] automated theorem prover, enabling an export of its proofs to Coq [8] and Lambdapi [2]. Finally, we have evaluated the algorithm's performance for inner and pre-inner Skolemization rules through the certification of proofs from some categories of the TPTP [39] library.
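As background (standard material, not specific to the paper), Skolemization replaces an existential quantifier by a fresh function of the universal variables in scope:

```latex
\[
  \forall x.\ \exists y.\ P(x,y)
  \quad\rightsquigarrow\quad
  \forall x.\ P(x, f(x))
\]
% Deskolemization must undo such steps inside a tableau proof so that the
% resulting proof is checkable by systems such as Coq or Lambdapi, which do
% not accept the fresh symbol $f$ without justification.
```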
Abstract. The cvc5 solver is today one of the strongest systems for solving first-order problems with theories, but also without them. In this work we equip its enumeration-based instantiation with a neural network that guides the choice of the quantified formulas and their instances. For that we develop a relatively fast graph neural network that repeatedly scores all available instantiation options with respect to the available formulas. The network runs directly on a CPU without the need for any special hardware. We train the neural guidance on a large set of proofs generated by the e-matching instantiation strategy and evaluate its performance on a set of previously unseen problems.
Abstract. Rewriting techniques based on reduction orderings generate “just enough” consequences to retain first-order completeness. This is ideal for superposition-based first-order theorem proving, but for at least one approach to inductive reasoning we show that we are missing crucial consequences. We therefore extend the superposition calculus with rewriting-based techniques to generate sufficient consequences for automating induction in saturation. When applying our work within the unit-equational fragment, our experiments with the theorem prover Vampire show significant improvements for inductive reasoning.
Abstract. An inductive proof can be represented as a proof schema, i.e. as a parameterized sequence of proofs defined in a primitive recursive way. A corresponding cut-elimination method, called schematic CERES, can be used to analyze these proofs, and to extract their (schematic) Herbrand sequents, even though Herbrand’s theorem in general does not hold for proofs with induction inferences. This work focuses on the most crucial part of the schematic cut-elimination method, which is to construct a refutation of a schematic formula that represents the cut-structure of the original proof schema. Moreover, we show that this new formalism allows the extraction of a structure from the refutation schema, called a Herbrand schema, which represents its Herbrand sequent.
Abstract. Recently an extension to higher-order logic — called DHOL — was introduced, enriching the language with dependent types and creating a powerful extensional type theory. In this paper we propose two ways in which choice can be added to DHOL. We extend the DHOL term structure by Hilbert's indefinite choice operator ε, define a translation of choice terms to HOL choice that extends the existing translation from DHOL to HOL, show that the extension of the translation is complete, and give an argument for soundness. We finally evaluate the extended translation on a set of dependent HOL problems that require choice.
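For reference, the behaviour of ε is governed by Hilbert's classical critical axiom (shown here in its textbook form; the paper's DHOL typing is richer):

```latex
\[
  P(t) \;\rightarrow\; P(\varepsilon x.\ P(x))
\]
% If any term $t$ satisfies $P$, then $\varepsilon x.\,P(x)$ denotes some
% (unspecified) witness of $P$.
```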
Abstract. Certifying results by checking proofs and models is an essential feature of modern SAT solving. While incremental solving with assumptions and core extraction is crucial for many applications, support for incremental proof certificates remains lacking. We propose a proof format and corresponding checkers for incremental SAT solving. We further extend it to leverage resolution hints. Experiments on incremental SAT solving for Bounded Model Checking and Satisfiability Modulo Theories demonstrate the feasibility of our approach, further confirming that resolution hints substantially reduce checking time.
Abstract. We introduce a simple game of resource-conscious reasoning. In this two-player game, players P and O place tokens of positive and negative polarity onto a game board according to certain rules. P wins if she manages to match every negative token with a corresponding positive token.
We study this token game using methods from computational complexity and proof theory. Specifically, we show complexity results for various fragments of the game, including PSPACE-completeness of a finite restriction and undecidability of the full game featuring non-terminating plays. Moreover, we show that the finitary version of the game is axiomatisable and can be embedded into exponential-free linear logic. The full game is shown to satisfy the exponential rules of linear logic, but is not fully captured by it. Finally, we show determinacy of the game, that is, the existence of a winning strategy for one of the players.
Abstract. While many state-of-the-art Automated Theorem Provers (ATPs), like E and Vampire, have been subject to extensive tuning of strategy schedules in the last decade, the classical ATP Prover9 has never been optimized in this direction. Both E and Vampire provide the user with an automatic mode that selects good proof search strategies based on the properties of the input problem, while Prover9 by default provides only a relatively weak auto mode. Interestingly, Prover9 provides more varied means for proof control than its competitors. These means, however, must be investigated manually, which is possible only for experienced Prover9 users with a good understanding of how Prover9 works.
In this paper, we investigate the possibilities of automatic configuration of Prover9 for user-specified benchmark problems. We employ the automated strategy invention system Grackle to generate Prover9 strategies with both basic and advanced proof search options, where the latter require sophisticated strategy space features for Grackle. We test the strategy invention on the AIM train/test problem collection and show that Prover9 can outperform both E and Vampire on these problems. To test the generality of our approach, we also train and evaluate strategies on TPTP problems, showing that Prover9 can achieve reasonable complementarity with other ATPs.
Abstract. Deep inference is a proof-theoretical formalism that generalises the “shallow inference” of sequent calculus by permitting the application of inference rules on subformulae, like term rewriting rules. Deep inference makes it possible to build shorter proofs than sequent calculus does. However, deep inference introduces higher nondeterminism in proof search, an obstacle to applications. Deep inference is essential for designing system BV, an extension of multiplicative linear logic (MLL) with a self-dual non-commutative operator. MLL has shallow inference systems, whereas BV cannot be captured by a shallow-only system: as Tiu showed, any restriction on rule depth makes a system incomplete for BV. This paper shows that any restriction that rules out shallow rules makes the system incomplete, too. Our results indicate that for system BV, shallow and deep rules must coexist for completeness. We provide extensive empirical evidence that deep inference can still be faster than shallow inference when used strategically with a proof-theoretical technique for reducing nondeterminism. We show that prioritising deeper rule instances generally reduces the cost of proof search by reducing the size of the managed contexts, consequently providing more immediate access to shorter proofs. Moreover, we identify a class of MLL formulae whose deep inference proof search times grow linearly in the number of atoms, in contrast to an exponential growth pattern with shallow inference. We introduce a large and exhaustive benchmark for MLL, with and without mix, and a proof search framework for applying various search strategies, which should be of independent interest.
Abstract. Epistemic processes describe the dynamic behaviour of multi-agent systems driven by the knowledge of agents, which perform epistemic actions that may lead to knowledge updates. Executing epistemic processes directly on epistemic states, given in the traditional way by pointed Kripke structures, quickly becomes computationally expensive due to the semantic update constructions. Based on an abstraction to formulæ of interest, we introduce a symbolic epistemic state representation and a notion of representable epistemic action with efficient symbolic updates. In contrast to existing work on belief or knowledge bases, our approach can handle epistemic actions modelled by arbitrary action models. We introduce an epistemic process calculus and a propositional dynamic logic for specifying process properties that can be interpreted both on the concrete semantic level and on the symbolic level. We show that our abstraction technique preserves and reflects behavioural properties of epistemic processes whenever processes are started in a symbolic state that is an abstraction of a semantic epistemic state.
Abstract. Adding blocked clauses to a CNF formula can substantially speed up SAT solving, both in theory and practice. In theory, the addition of blocked clauses can exponentially reduce the length of the shortest refutation of a formula [17, 19]. In practice, it has recently been shown that the runtime of CDCL solvers decreases significantly for certain instance families when blocked clauses are added as a preprocessing step [10, 22]. This fact is in contrast to, but not in contradiction with, prior results showing that Blocked-Clause Elimination (BCE) is sometimes an effective preprocessing step [14, 15]. We suggest that the practical role of blocked clauses in SAT solving might be richer than expected. Concretely, we propose a theoretical study of the complexity of Blocked-Clause Addition (BCA) as a preprocessing step for SAT solving, and in particular consider the problem of adding the maximum number of blocked clauses of a given arity k to an input formula F. While BCE is a confluent process, meaning that the order in which blocked clauses are eliminated is irrelevant, this is not the case for BCA: adding a blocked clause to a formula might unblock a different clause that was previously blocked. This order-sensitivity turns out to be a crucial obstacle to carrying out BCA efficiently as a preprocessing step. Our main result is that computing the maximum number of k-ary blocked clauses that can be added to an input formula F is NP-hard for every k ≥ 2.
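A small self-contained checker for the standard definition of blocked clauses (our illustration; clauses are frozensets of nonzero ints, with -v the negation of v): a clause C is blocked on a literal l in C if every resolvent of C with a clause of F containing -l is a tautology.

```python
def is_tautology(clause):
    return any(-lit in clause for lit in clause)

def is_blocked(c, formula):
    for lit in c:
        partners = [d for d in formula if -lit in d]
        if all(is_tautology((c - {lit}) | (d - {-lit})) for d in partners):
            return True
    return False

# Classic textbook example with a = 1, b = 2, c = 3:
F = [frozenset({1, 2}), frozenset({-1, 3})]  # (a or b), (not a or c)
C = frozenset({1, -2, -3})                   # (a or not b or not c)
print(is_blocked(C, F))  # True: the only resolvent on a, {-2, -3, 3}, is a tautology
```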
Abstract. One of the main challenges in the area of Neuro-Symbolic AI is to perform logical reasoning in the presence of both neural and symbolic data. This requires combining heterogeneous data sources such as knowledge graphs, neural model predictions, structured databases, crowd-sourced data, and many more. To allow for such reasoning, we generalise the standard rule-based language Datalog with existential rules (commonly referred to as tuple-generating dependencies) to the fuzzy setting, by allowing for arbitrary t-norms in place of classical conjunction in rule bodies. The resulting formalism allows us to perform reasoning about data associated with degrees of uncertainty while preserving computational complexity results and the applicability of reasoning techniques established for the standard Datalog setting. In particular, we provide fuzzy extensions of Datalog chases which produce fuzzy universal models, and we exploit them to show that in important fragments of the language, reasoning has the same complexity as in the classical setting.
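The three most common t-norms that could instantiate the fuzzy conjunction in rule bodies (standard definitions; the paper allows arbitrary t-norms):

```python
def goedel(a: float, b: float) -> float:       # Goedel (minimum) t-norm
    return min(a, b)

def product(a: float, b: float) -> float:      # product t-norm
    return a * b

def lukasiewicz(a: float, b: float) -> float:  # Lukasiewicz t-norm
    return max(0.0, a + b - 1.0)

# A rule body p(x) AND q(x) with degrees 0.8 and 0.7 fires with degree:
for t in (goedel, product, lukasiewicz):
    print(t.__name__, t(0.8, 0.7))  # approximately 0.7, 0.56, 0.5
```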
Abstract. We present a number of first- and second-order extensions to SMT theories specifically aimed at representing and analyzing SQL queries with join, projection, and selection operations. We support reasoning about SQL queries with either bag or set semantics for database tables. We provide the former via an extension of a theory of finite bags and the latter via an extension of the theory of finite relations. Furthermore, we add the ability to reason about tables with null values by introducing a theory of nullable sorts based on an extension of the theory of algebraic datatypes. We implemented solvers for these theories in the SMT solver cvc5 and evaluated them on a set of benchmarks derived from public sets of SQL equivalence problems.
Abstract. We provide the first proof of confluence for cut elimination in multiplicative-exponential linear logic proof-nets that is not based on Newman's lemma or strong normalization, not even indirectly. To achieve our goal, we take inspiration from Tait and Martin-Löf's method based on parallel reduction for the lambda-calculus, even though the wilder realm of untyped proof-nets makes the proof subtler and more challenging.
Abstract. In this work, we consider two sets I and O of bounded integer variables, modeling the inputs and outputs of a program. Given a specification Post, which is a Boolean combination of linear or polynomial inequalities with real coefficients over I ∪ O, our goal is to synthesize the weakest possible pre-condition Pre and a program P satisfying the Hoare triple {Pre}P{Post}. We provide a novel, practical, sound and complete algorithm, inspired by Farkas' Lemma and Handelman's Theorem, that synthesizes both the program P and the pre-condition Pre over a bounded integral region. Our approach is exact and guaranteed to find the weakest pre-condition. Moreover, it always synthesizes both P and Pre as linear decision lists. Thus, our output consists of simple programs and pre-conditions that facilitate further static analysis. We also provide experimental results over benchmarks showcasing the real-world applicability of our approach and considerable performance gains over the state-of-the-art.
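A toy instance of the synthesis problem (our illustration, not one of the paper's benchmarks):

```latex
% Specification over input $i$ and output $o$:
\[
  \mathit{Post} \;\equiv\; o \geq i \;\wedge\; o \geq -i \;\wedge\; (o = i \vee o = -i)
\]
% i.e. $o = |i|$. A valid answer is the linear decision list
%   P:  if $i \geq 0$ then $o := i$ else $o := -i$
% together with the weakest pre-condition $\mathit{Pre} \equiv \mathit{true}$,
% since every bounded input admits an output satisfying Post.
```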