A Theorem Prover Based Approach for SAT-Based Model Checking Certification
ABSTRACT. In the field of formal verification, certifying proofs serve as compelling evidence to demonstrate the correctness of a model within a deductive system. These proofs can be automatically generated as a by-product of the verification process and are key artefacts for high-assurance systems. Their significance lies in their ability to be independently verified by proof checkers, which provides a more convenient approach than certifying the tools that generate them.
Modern model checking algorithms adopt deductive methods and usually generate proofs in the form of inductive invariants, under the assumption that these refer to the original system under verification. Model checkers, though, often make use of a range of complex pre-processing simplifications and transformations to ease the verification process, which add another layer of complexity to the generation of proofs.
In this paper, we present a novel approach for certifying model checking results that exploits a theorem prover and a theory of temporal deductive rules able to support various kinds of transformations and simplifications of the original circuit.
We implemented and experimentally evaluated our contribution on invariants generated by two state-of-the-art model checkers, nuXmv and PdTRAV, defining a set of rules within a theorem prover to validate each certificate.
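As a rough intuition for what such a certificate must establish (the paper itself discharges these obligations with temporal deductive rules inside a theorem prover), the sketch below checks the three standard conditions on an inductive-invariant certificate for a toy boolean transition system using the z3 SMT solver. The system and all names (x0, x1, inv, ...) are illustrative assumptions, not taken from the paper.

```python
# A minimal sketch of the checks a proof checker performs on an
# inductive-invariant certificate; the toy system is illustrative only.
from z3 import Bools, Solver, And, Not, Implies, substitute, unsat

x0, x1 = Bools("x0 x1")            # current-state variables
x0n, x1n = Bools("x0n x1n")        # next-state variables

init = And(Not(x0), Not(x1))                     # initial states: 00
trans = And(x0n == And(Not(x0), Not(x1)),        # a tiny wrapping counter
            x1n == x0)
bad = And(x0, x1)                                # error states: 11
inv = Not(And(x0, x1))                           # candidate certificate

def valid(formula):
    """Check validity by refuting the negation."""
    s = Solver()
    s.add(Not(formula))
    return s.check() == unsat

inv_next = substitute(inv, (x0, x0n), (x1, x1n))  # Inv over next-state vars

assert valid(Implies(init, inv))                  # 1. initiation
assert valid(Implies(And(inv, trans), inv_next))  # 2. consecution
assert valid(Implies(inv, Not(bad)))              # 3. safety
print("certificate accepted")
```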
Infinite State Model Checking by Learning Transitive Relations
ABSTRACT. We propose a new approach for proving safety of infinite state systems. It extends the analyzed system by transitive relations until its diameter D becomes finite, i.e., until a constant number of steps suffices to cover all reachable states, irrespective of the initial state. Then we can prove safety by checking that no error state is reachable in D steps. To deduce transitive relations, we use recurrence analysis. While recurrence analyses can usually find conjunctive relations only, our approach also discovers disjunctive relations by combining recurrence analysis with projections. An empirical evaluation of the implementation of our approach in our tool LoAT shows that it is highly competitive with the state of the art.
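To illustrate the key soundness requirement, consider the loop `while (x > 0) x = x - 1` and the candidate relation R(x, x') := x > 0 and 0 <= x' < x, which over-approximates any positive number of loop iterations. Before such a relation may soundly extend the transition relation, it must be shown transitive. The sketch below performs that check with the z3 SMT solver; the loop and the relation are illustrative assumptions, not examples from the paper.

```python
# Checking that a candidate accelerated relation R is transitive,
# i.e., R(x,y) and R(y,z) imply R(x,z). Illustrative example only.
from z3 import Ints, And, Implies, Solver, Not, unsat

x, y, z = Ints("x y z")

def R(a, b):                           # candidate recurrence for the loop:
    return And(a > 0, 0 <= b, b < a)   # "b reached from a in >= 1 steps"

s = Solver()
s.add(Not(Implies(And(R(x, y), R(y, z)), R(x, z))))
assert s.check() == unsat
print("R is transitive: one R-step covers any number of loop iterations")
```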
Relatively Complete and Efficient Partial Quantifier Elimination
ABSTRACT. Quantifier elimination is used in various automated reasoning tasks, including quantified SMT solving, exists/forall solving, program synthesis, model checking, and constrained Horn clause (CHC) solving. Complete quantifier elimination, however, is computationally intractable for many theories. The recent algorithm QEL shows a promising approach to approximate quantifier elimination, which has resulted in improvements in solver performance. QEL performs partial quantifier elimination with a completeness guarantee that depends on a certain semantic property of the given formula. Considerably generalizing the previous approach, we identify a subclass of local theories in which partial quantifier elimination can be performed efficiently. We present T-QEL, a parametrized polynomial-time algorithm that is a sound extension of QEL and is relatively complete for this class of theories. The algorithm utilizes the proof-theoretic characterization of the theories, which is based on restricted derivations. Finally, we prove that T-QEL is sound in general and relatively complete with respect to the identified class of theories.
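The basic flavor of substitution-based partial quantifier elimination can be seen in a self-contained sketch (an illustration of the simplest elimination step only, not the paper's T-QEL): if some conjunct defines the bound variable, x = t with x not occurring in t, then the quantifier can be dropped by substituting t for x; when no such definition exists, the conjunction is returned unchanged, i.e., elimination is partial.

```python
# Illustrative sketch of one partial-QE step (not the paper's T-QEL).
# Terms are tuples ("f", arg1, ...); variables are plain strings.

def occurs(x, t):
    return t == x if isinstance(t, str) else any(occurs(x, a) for a in t[1:])

def subst(t, x, r):
    if t == x:
        return r
    if isinstance(t, str):
        return t
    return (t[0],) + tuple(subst(a, x, r) for a in t[1:])

def eliminate(x, conjuncts):
    """Drop 'exists x' from a conjunction if a definition x = t exists;
    otherwise return the conjunction unchanged (partial elimination)."""
    for c in conjuncts:
        if c[0] == "=" and c[1] == x and not occurs(x, c[2]):
            rest = [d for d in conjuncts if d is not c]
            return [subst(d, x, c[2]) for d in rest]
    return conjuncts

# exists x. (x = f(y) /\ P(x, z))  becomes  P(f(y), z)
print(eliminate("x", [("=", "x", ("f", "y")), ("P", "x", "z")]))
```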
ABSTRACT. Second-order quantifier elimination is the problem of finding, given a formula with second-order quantifiers, a logically equivalent first-order formula.
While such a first-order equivalent does not always exist and is not computable in general, there are practical algorithms and subclasses with applications throughout computational logic.
One of the most prominent algorithms for second-order quantifier elimination is the SCAN algorithm, which is based on saturation theorem proving.
In this paper we show how the SCAN algorithm on clause sets can be extended to solve a more general problem: namely, finding an instance of the second-order quantifiers that results in a logically equivalent first-order formula.
In addition we provide a prototype implementation of the proposed method.
This work paves the way for applying the SCAN algorithm to new problems in application domains such as modal correspondence theory, knowledge representation, and verification.
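As a toy illustration of the classic SCAN flavor (this example is not taken from the paper): eliminating the second-order quantifier over P from a small clause set by resolving on P and then deleting the clauses that still contain P.

```latex
% Illustrative toy example of SCAN-style elimination (not from the paper):
% resolve on P, then delete the clauses still containing P.
\exists P.\; \forall x.\, \bigl(\lnot P(x) \lor Q(x)\bigr) \wedge \bigl(P(x) \lor R(x)\bigr)
\;\equiv\; \forall x.\, Q(x) \lor R(x)
```

In the more general problem described above, one would additionally ask for an instance of the second-order quantifier witnessing the equivalence; here, for example, P(x) := Q(x) makes the original clause set logically equivalent to the computed first-order result.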
ABSTRACT. Redundancy elimination is one of the crucial ingredients of efficient saturation-based proof search. We improve redundancy elimination by introducing a new notion of redundancy, based on partial clauses and redundancy formulas, which is more powerful than the standard notion: there are both clauses and inferences that are redundant under our notion but not under the standard one. In a way, our notion blurs the distinction between redundancy at the level of inferences and redundancy at the level of clauses. We present a superposition calculus PaRC on partial clauses. Our calculus is refutationally complete and is strong enough to capture some standard restrictions of the superposition calculus. We discuss the implementation of the calculus in the theorem prover Vampire. Our experiments show the power of the new approach: we were able to solve 24 TPTP problems not previously solved by any prover, including previous versions of Vampire.
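For background (this illustrates the standard redundancy notion that the paper strengthens, not the new partial-clause notion itself): a clause is redundant with respect to a clause set N if it follows from clauses of N that are smaller in the simplification ordering, as in the demodulation step below.

```latex
% Standard-redundancy illustration (background, not the paper's new notion):
% with a \succ b, the clause f(a) \approx c follows from two smaller clauses,
% so it is redundant and may be deleted.
a \approx b,\qquad f(b) \approx c \;\models\; f(a) \approx c
```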
ABSTRACT. The superposition calculus for reasoning in first-order logic with equality relies on simplification orderings on terms. Modern saturation provers use the Knuth-Bendix order (KBO) and the lexicographic path order (LPO) for discovering redundant clauses and inferences. Implementing term orderings is, however, challenging. While KBO comparisons can be performed in linear time and LPO checks in quadratic time, using the best known algorithms for these orders is not enough. Indeed, our experiments show that for some examples term ordering checks may take about 98% of the overall proving time. The reason for this is that some equalities that cannot be ordered can become ordered after applying a substitution (post-ordered), and we have to check for post-ordering repeatedly for the same equalities.
In this paper, we show how to improve post-ordering checks by introducing a new data structure called term ordering diagrams (TODs), which creates an index for these checks. We achieve efficiency through lazy modifications of the index and by storing and reusing information from previously performed checks to speed up subsequent ones. Our experiments demonstrate the efficiency of TODs.
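To make the cost of these checks concrete, here is a minimal, simplified KBO sketch. It is illustrative only: the weights, precedence, and term encoding are assumptions, the special case for a unary weight-zero symbol is omitted, and it does not reflect the TOD data structure itself, which indexes repeated post-ordering checks rather than single comparisons.

```python
# A simplified KBO comparison (illustrative assumptions throughout).
# Terms are ("f", arg1, ...) tuples; variables are plain strings.
from collections import Counter

WEIGHT = {"f": 1, "g": 1, "a": 1, "b": 1}   # assumed symbol weights
PREC   = {"a": 0, "b": 1, "g": 2, "f": 3}   # assumed precedence

def is_var(t):
    return isinstance(t, str)

def weight(t):
    if is_var(t):
        return 1                            # variables get the minimal weight
    return WEIGHT[t[0]] + sum(weight(a) for a in t[1:])

def var_count(t, acc=None):
    acc = Counter() if acc is None else acc
    if is_var(t):
        acc[t] += 1
    else:
        for a in t[1:]:
            var_count(a, acc)
    return acc

def kbo_greater(s, t):
    """True iff s > t in this simplified KBO."""
    sv, tv = var_count(s), var_count(t)
    if any(sv[x] < n for x, n in tv.items()):
        return False                        # variable condition violated
    ws, wt = weight(s), weight(t)
    if ws != wt:
        return ws > wt
    if is_var(s):
        return False                        # equal weights, s is a variable
    if is_var(t):
        return True                         # s compound, t a variable in s
    if s[0] != t[0]:
        return PREC[s[0]] > PREC[t[0]]      # compare head symbols
    for a, b in zip(s[1:], t[1:]):          # same head: lexicographic on args
        if a != b:
            return kbo_greater(a, b)
    return False

# f(a, x) > g(x): weight 3 vs 2, and x occurs once on both sides.
print(kbo_greater(("f", ("a",), "x"), ("g", "x")))   # True
```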