19TH INTERNATIONAL CONFERENCE ON LOGIC FOR PROGRAMMING, ARTIFICIAL INTELLIGENCE AND REASONING

LPAR ON SUNDAY, DECEMBER 15TH, 2013

08:30-09:30 Session 5: Invited talk by Toby Walsh
08:30
Decomposition in SAT (or why we sometimes need constraint programming)
SPEAKER: Toby Walsh
ABSTRACT.

Decomposition of a problem into propositional satisfiability is an attractive solution method for many problems. In this talk, I discuss both positive and negative results about such decompositions. I identify some problems where we can decompose without loss and others where we cannot. On the way, I'll discuss automata, matching and a number of other fundamental ideas in theoretical computer science.
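
A minimal sketch of the kind of decomposition the talk considers (our own example, not taken from the talk): an AllDifferent constraint decomposed into CNF over one-hot domain variables. Such a decomposition is correct, but can propagate less than a dedicated constraint programming propagator.

```python
# Illustrative sketch: decompose AllDifferent(x_0..x_{n-1}) over domain
# {0..d-1} into CNF. Boolean variable var(i, v) means "x_i takes value v".
from itertools import combinations

def all_different_cnf(n, d):
    """Return (num_vars, clauses) in DIMACS-style integer literals."""
    var = lambda i, v: i * d + v + 1
    clauses = []
    for i in range(n):
        clauses.append([var(i, v) for v in range(d)])      # x_i takes some value
        for v, w in combinations(range(d), 2):
            clauses.append([-var(i, v), -var(i, w)])       # ...and at most one
    for i, j in combinations(range(n), 2):
        for v in range(d):
            clauses.append([-var(i, v), -var(j, v)])       # x_i and x_j differ
    return n * d, clauses

num_vars, clauses = all_different_cnf(3, 3)
print(num_vars, len(clauses))                              # 9 variables, 21 clauses
```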

09:30-10:00 Coffee Break
10:00-12:00 Session 6
10:00
System Description: E 1.8
ABSTRACT.

E is a theorem prover for full first-order logic with equality. It reduces first-order problems to clause normal form and employs a saturation algorithm based on the equational superposition calculus. E is built on shared terms with cached rewriting, and employs several innovations for efficient clause indexing. Major strengths of the system are automatic problem analysis and highly flexible search heuristics. The prover can provide verifiable proof objects and answer substitutions with very little overhead. E performs well, solving more than 69% of TPTP-5.4.0 FOF and CNF problems in automatic mode.
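
For readers unfamiliar with saturation provers, the given-clause loop at the heart of systems like E can be sketched as follows. This is a generic textbook skeleton, not E's actual code; `pick_best`, `simplify`, and `generate_inferences` are hypothetical placeholders for heuristic clause selection, rewriting/subsumption, and superposition inferences.

```python
# Generic given-clause saturation loop (a sketch, not E's implementation).
def saturate(axioms, pick_best, simplify, generate_inferences):
    processed, unprocessed = set(), set(axioms)
    while unprocessed:
        given = pick_best(unprocessed)          # heuristic clause selection
        unprocessed.discard(given)
        given = simplify(given, processed)      # e.g. cached rewriting
        if given is None:                       # clause shown redundant
            continue
        if given == frozenset():                # empty clause derived
            return "unsatisfiable"
        processed.add(given)
        for c in generate_inferences(given, processed):
            c = simplify(c, processed)
            if c is not None:
                unprocessed.add(c)
    return "saturated"                          # no refutation exists
```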

10:20
Proof-Pattern Recognition and Lemma Discovery in ACL2
SPEAKER: Moa Johansson
ABSTRACT.

We present a novel technique for combining statistical machine learning for proof-pattern recognition with symbolic methods for lemma discovery. The resulting tool, ACL2(ml), gathers proof statistics and uses statistical pattern-recognition to pre-process data from libraries, and then suggests auxiliary lemmas in new proofs by analogy with already seen examples. This paper presents the implementation of ACL2(ml) alongside theoretical descriptions of the proof-pattern recognition and lemma discovery methods involved in it.
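
As a generic illustration of the statistical half of such a pipeline (not ACL2(ml)'s actual algorithm; the feature names are hypothetical), one can represent each proof by a numeric feature vector and suggest the lemmas used in the most similar previously seen proof:

```python
# Toy proof-pattern recognition: nearest-neighbour lemma suggestion.
import math

def features(proof):
    return (proof["steps"], proof["max_depth"], proof["inductions"])

def suggest_lemmas(new_proof, library):
    nearest = min(library,
                  key=lambda p: math.dist(features(p), features(new_proof)))
    return nearest["lemmas_used"]

library = [
    {"steps": 12, "max_depth": 4, "inductions": 1, "lemmas_used": ["app_assoc"]},
    {"steps": 30, "max_depth": 9, "inductions": 2, "lemmas_used": ["rev_rev"]},
]
print(suggest_lemmas({"steps": 11, "max_depth": 5, "inductions": 1,
                      "lemmas_used": []}, library))        # ['app_assoc']
```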

10:40
A Graphical Language for Proof Strategies
SPEAKER: Gudmund Grov
ABSTRACT.

Complex automated proof strategies are often difficult to extract, visualise, modify, and debug. Traditional tactic languages, often based on stack-based goal propagation, make it easy to write proofs that obscure the flow of goals between tactics and are fragile to minor changes in input, proof structure, or the tactics themselves. Here, we address this by introducing a graphical language called PSGraph for writing proof strategies. Strategies are constructed visually by "wiring together" collections of tactics and evaluated by propagating goal nodes through the diagram via graph rewriting. Tactic nodes can have many output wires, and use a filtering procedure based on goal-types (predicates describing the features of a goal) to decide where best to send newly-generated sub-goals. In addition to making the flow of goal information explicit, the graphical language can fulfil the role of many tacticals using visual idioms like branching, merging, and feedback loops. We argue that this language enables development of more robust proof strategies and provide several examples, along with a prototype implementation in Isabelle.
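
The evaluation model can be pictured with a small sketch (ours, not the PSGraph implementation): tactic nodes map a goal to subgoals, and each output wire carries a goal-type predicate that decides which subgoals it accepts.

```python
# Minimal goal-propagation sketch over a strategy graph.
def run_graph(tactics, wires, entry, goals):
    # tactics: name -> (goal -> list of subgoals)
    # wires:   list of (source, goal_type_predicate, destination)
    pending, finished = [(entry, g) for g in goals], []
    while pending:
        node, goal = pending.pop()
        for sub in tactics[node](goal):
            accepting = [d for s, p, d in wires if s == node and p(sub)]
            if accepting:
                pending.extend((d, sub) for d in accepting)
            else:
                finished.append(sub)     # no wire accepts it: leave as output
    return finished
```

Branching, merging, and feedback loops then correspond simply to wire topologies in this picture.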

11:00
A Semantic Basis for Proof Queries and Transformations
ABSTRACT.

We extend the query language PrQL, which is designed for inspecting machine representations of proofs, to also allow transformations of these proofs. There are several novel aspects. First, PrQL natively supports hiproofs, which express proof structure using hierarchically nested labelled trees, which we claim is a natural way of taming the complexity of huge proofs. Query-driven transformations enable manipulation of this structure, in particular, to transform proofs produced by interactive theorem provers into forms that assist their understanding, or that could be consumed by other tools. In this paper we motivate and define basic transformation operations, using a new abstract denotational semantics of hiproofs and queries. This extends our previous semantics for queries based on syntactic tree representations. We define update operations that add and remove sub-proofs, and manipulate the hierarchy to group and ungroup nodes. We show that these basic operations are well-behaved, and so can form a sound core for a hierarchical transformation language.
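
A toy rendering of the hierarchy-manipulation operations (ours, not PrQL's semantics): hiproofs as labelled trees, where grouping wraps a slice of children under a fresh label and ungrouping splices them back.

```python
# Hiproof-style grouping/ungrouping on labelled trees (illustrative only).
def group(tree, label, lo, hi):
    kids = tree["children"]
    tree["children"] = kids[:lo] + [{"label": label,
                                     "children": kids[lo:hi]}] + kids[hi:]
    return tree

def ungroup(tree, label):
    kids = []
    for child in tree["children"]:
        if child["label"] == label:
            kids.extend(child["children"])       # splice grouped subproofs back
        else:
            kids.append(child)
    tree["children"] = kids
    return tree

proof = {"label": "root", "children": [{"label": t, "children": []}
                                       for t in ("t1", "t2", "t3")]}
group(proof, "lemma-block", 0, 2)
ungroup(proof, "lemma-block")                    # restores the flat structure
```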

11:20
Formalization of Laplace Transform using the Multivariable Calculus Theory of HOL-Light
SPEAKER: Osman Hasan
ABSTRACT.

Algebraic techniques based on the Laplace transform are widely used for solving differential equations and evaluating the transfer of signals while analyzing physical aspects of many safety-critical systems. To facilitate formal analysis of these systems, we present the formalization of the Laplace transform using the multivariable calculus theories of HOL Light. In particular, we use the integral, differential, transcendental and topological theories of multivariable calculus to formally define the Laplace transform in higher-order logic and reason about the correctness of Laplace transform properties, such as existence, linearity, frequency shifting, and differentiation and integration in the time domain. In order to demonstrate the practical effectiveness of this formalization, we use it to formally verify the transfer function of the Linear Transfer Converter (LTC) circuit, which is a commonly used electrical circuit.
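
For reference, the classical definition and some of the properties mentioned above, in ordinary mathematical notation (the paper encodes these in higher-order logic):

```latex
% Definition of the Laplace transform:
\mathcal{L}\{f\}(s) \;=\; F(s) \;=\; \int_{0}^{\infty} e^{-st} f(t)\, dt
% Linearity:
\mathcal{L}\{\alpha f + \beta g\} \;=\; \alpha\,\mathcal{L}\{f\} + \beta\,\mathcal{L}\{g\}
% Frequency shifting:
\mathcal{L}\{e^{at} f(t)\}(s) \;=\; F(s - a)
% Differentiation in the time domain:
\mathcal{L}\{f'\}(s) \;=\; s\,F(s) - f(0)
```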

11:40
Lemma Mining over HOL Light
ABSTRACT.

Large formal mathematical libraries consist of millions of atomic inference steps that give rise to a corresponding number of proved statements (lemmas). Analogously to informal mathematical practice, only a tiny fraction of such statements is named and re-used in later proofs by formal mathematicians. In this work, we suggest and implement exact criteria defining the estimated usefulness of the HOL Light lemmas for proving further theorems. We use these criteria to mine the large inference graph of all lemmas in the core HOL Light library, adding thousands of the best lemmas to the pool of named statements that can be re-used in later proofs. We then evaluate the usefulness of the new lemmas by comparing the performance of automated proving of the core HOL Light theorems with and without such added lemmas.
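
A hypothetical flavour of such a usefulness criterion (the paper defines its own exact criteria): rank each lemma by how many later proofs use it, discounted by how hard the lemma itself is to prove.

```python
# Toy lemma ranking over a dependency graph (illustrative only).
def rank_lemmas(deps, proof_size):
    # deps:       theorem -> set of lemmas its proof uses
    # proof_size: lemma   -> number of inference steps in the lemma's own proof
    uses = {}
    for lemmas in deps.values():
        for lem in lemmas:
            uses[lem] = uses.get(lem, 0) + 1
    score = {lem: uses.get(lem, 0) / size for lem, size in proof_size.items()}
    return sorted(score, key=score.get, reverse=True)

deps = {"thm1": {"lemA", "lemB"}, "thm2": {"lemA"}}
proof_size = {"lemA": 5, "lemB": 40}
print(rank_lemmas(deps, proof_size))             # ['lemA', 'lemB']
```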

12:00-14:00 Lunch
14:00-15:30 Session 7: Temporal Logic
14:00
On Promptness in Parity Games
ABSTRACT.

Parity games are a powerful formalism for the automatic synthesis and verification of reactive systems. They are closely related to alternating omega-automata and emerge as a natural method for the solution of the mu-calculus model checking problem. Due to these tight connections, parity games are a well-established formalism for describing liveness properties such as "every request that occurs infinitely often is eventually responded to". Unfortunately, the classical form of such a condition suffers from the strong drawback that there is no bound on the time that separates a request from its response, i.e., responses are not promptly provided. Recently, to overcome this limitation, several parity game variants have been proposed in which quantitative requirements are added to the classic qualitative ones.

In this paper, we make a general study of the concept of promptness in parity games, providing a unified theoretical framework that covers several of the cited variants along with new ones. We also describe simple polynomial reductions from all these conditions to either Büchi or parity games, which simplify all previously known procedures. In particular, they improve the complexity results for cost and bounded-cost parity games. Indeed, we provide solution algorithms showing that determining the winner of these games lies in UPTime ∩ CoUPTime.
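
For context, the classical qualitative parity condition can be stated as follows; the promptness variants studied here additionally bound the delay between a request and its response.

```latex
% A play \pi = v_0 v_1 v_2 \ldots with priority function \Omega is winning
% for player 0 under the parity condition iff
\max \{\, p \mid \Omega(v_i) = p \text{ for infinitely many } i \,\} \text{ is even.}
```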

14:20
Comparison of LTL to Deterministic Rabin Automata Translators
ABSTRACT.

Increasing interest in control synthesis and probabilistic model checking has spurred the recent development of LTL to deterministic omega-automata translation. The standard approach, represented by the ltl2dstar tool, employs Safra's construction to determinize a Büchi automaton produced by some LTL to Büchi automata translator. Since 2012, three new translators have appeared, namely Rabinizer, LTL3DRA, and Rabinizer 2. They all avoid Safra's construction, but work on LTL fragments only. We compare the performance of, and the automata produced by, the mentioned tools, where ltl2dstar is combined with several LTL to Büchi automata translators: besides the traditionally used LTL2BA, we also consider SPOT and LTL3BA.

14:40
Robotics, Temporal Logic and Stream Reasoning
ABSTRACT.

The area of AI robotics offers a set of fundamentally challenging problems for attempts to integrate logical reasoning functionality into such systems. The problems arise in part from the high degree of complexity of such architectures, which include real-time behaviour, distribution, concurrency, various data latencies in operation, and several levels of abstraction. Traditional theorem proving, although important, is often not feasible for many of the reasoning functions these systems require. In this article, we present a number of novel approaches to such reasoning functionality based on the use of temporal logic. The functionalities covered include automated planning, stream-based reasoning, and execution monitoring.
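
One standard technique behind stream-based temporal reasoning and execution monitoring is formula progression: rewriting a temporal formula against each incoming state. The sketch below is the textbook construction, not necessarily the authors' machinery.

```python
# LTL formula progression over a state stream (illustrative sketch).
# Formulas: True/False, ("atom", p), ("not", f), ("and", f, g),
# ("or", f, g), ("next", f), ("until", f, g).
def progress(f, state):
    if isinstance(f, bool):
        return f
    tag = f[0]
    if tag == "atom":
        return f[1] in state
    if tag == "not":
        r = progress(f[1], state)
        return (not r) if isinstance(r, bool) else ("not", r)
    if tag == "and":
        return land(progress(f[1], state), progress(f[2], state))
    if tag == "or":
        return lor(progress(f[1], state), progress(f[2], state))
    if tag == "next":
        return f[1]                      # the obligation moves to the next state
    if tag == "until":                   # f1 U f2 == f2 or (f1 and X(f1 U f2))
        return lor(progress(f[2], state), land(progress(f[1], state), f))

def land(a, b):
    if a is False or b is False: return False
    if a is True: return b
    if b is True: return a
    return ("and", a, b)

def lor(a, b):
    if a is True or b is True: return True
    if a is False: return b
    if b is False: return a
    return ("or", a, b)

f = ("until", True, ("atom", "goal"))    # "eventually goal"
for state in [{"moving"}, {"moving"}, {"goal"}]:
    f = progress(f, state)
print(f)                                 # True: satisfied on this stream
```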

14:50
Verifying Temporal Properties in Real Models
SPEAKER: Mark Reynolds
ABSTRACT.

Based on pioneering work of Läuchli and Leonard in the 1960s, a novel and expressive formal language, Model Expressions, for describing the compositional construction of general linear temporal structures has recently been proposed. A sub-language, Real Model Expressions, is capable of specifying models over the real flow of time, but its semantics are subtly different because of the specific properties of the real numbers.

Model checking techniques have been developed for general linear Model Expressions, and it was shown that checking temporal formulas against structures described in the formal language is PSPACE-complete and linear in the length of the model expression.

In this paper we present a model checker for temporal formulas over real-flowed models. In fact the algorithm, and so its complexity, is the same as for the general linear case.

To show that this is adequate, we use a notion of temporal bisimulation and establish that it is respected by the compositional construction method. This allows us to verify the correctness of applying the general linear model checking algorithm to real model expressions with their special semantics on real-flowed structures.

15:10
Expressive Path Queries on Graphs with Data
ABSTRACT.

Typical navigational languages for querying graph databases -- such as conjunctive regular path queries and their extensions -- are not rich enough to express relevant properties of the interaction between the underlying data and the topology. Two languages have been recently proposed to overcome this problem: a powerful walk logic (WL), which corresponds to the extension of FO over graph databases with explicit quantification over paths, and a simple class of regular expressions with memory (REMs), which compare the data values that appear in a path while checking that the label of such a path satisfies a given regular condition.

Our main contributions are the following: First, we pinpoint the exact complexity of the evaluation problem for WL, showing that it is non-elementary even in data complexity (that is, assuming queries to be fixed). This result closes a gap in the WL literature, and at the same time, rules out any practical applicability of the language. We then move to the class of REMs, which is a formalism that suffers from the opposite drawback: although REM evaluation can be solved in exponential time -- and even in polynomial time in data complexity -- the language is rather rudimentary for expressing complex properties, such as how data in different paths compare to each other. Our second contribution is the identification of a natural extension of the class of REMs, called register logic (RL), that makes up for this lack of expressiveness while retaining an elementary complexity of evaluation. We also study the limits of tractability in data complexity for fragments of RL, and how some RL properties can be expressed in a suitable extension of WL.
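
To make the REM formalism concrete, here is a toy evaluator (our own syntax, not the paper's): registers store data values seen along a path and are compared at later positions while the labels are matched.

```python
# Toy "regular expression with memory" over a data path: a list of
# (label, data) pairs. Each pattern item consumes one path element:
# (label, action), with action None, ("store", r) or ("eq", r).
# Concatenation only, for brevity.
def rem_match(pattern, path):
    if len(pattern) != len(path):
        return False
    regs = {}
    for (want, action), (label, data) in zip(pattern, path):
        if label != want:
            return False
        if action and action[0] == "store":
            regs[action[1]] = data                 # remember this data value
        elif action and action[0] == "eq":
            if regs.get(action[1]) != data:        # compare with remembered value
                return False
    return True

# "an a-edge, then a b-edge carrying the same data value":
pattern = [("a", ("store", "r")), ("b", ("eq", "r"))]
print(rem_match(pattern, [("a", 7), ("b", 7)]))    # True
print(rem_match(pattern, [("a", 7), ("b", 8)]))    # False
```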

15:30-16:00 Coffee Break
16:00-17:30 Session 8
Chair: Toby Walsh
16:00
PeRIPLO: A Framework for Producing Efficient Interpolants for SAT-based Software Verification
ABSTRACT.

Propositional interpolation is widely used as a means of over-approximation to achieve efficient SAT-based symbolic model checking. Different verification applications exploit interpolants for different purposes; it is thus unlikely that a single interpolation procedure could provide interpolants fit for all cases. This paper describes the PeRIPLO framework, an interpolating SAT-solver that implements a set of techniques to generate and manipulate interpolants for different model checking tasks. We demonstrate the flexibility of the framework in two software bounded model checking applications: verification of a given source code incrementally with respect to various properties, and verification of software upgrades with respect to a fixed set of properties. Both applications use interpolation for generating function summaries. Our systematic experimental investigation shows that logical strength and size of interpolants significantly affect the performance of verification, and that these characteristics indeed depend on the role interpolants play in the verification process.
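
For context, the interpolation property such solvers implement is Craig interpolation over an unsatisfiable conjunction:

```latex
% Given A \land B unsatisfiable, an interpolant I satisfies
A \models I, \qquad I \land B \models \bot, \qquad
\mathrm{vars}(I) \subseteq \mathrm{vars}(A) \cap \mathrm{vars}(B).
```

Different proofs of unsatisfiability yield interpolants of different logical strength and size, which is the degree of freedom a framework like this can exploit.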

16:20
Maximal Falsifiability: Definitions, Algorithms, and Applications
ABSTRACT.

Similarly to Maximum Satisfiability (MaxSAT), Minimum Satisfiability (MinSAT) is an optimization extension of the Boolean Satisfiability (SAT) decision problem. In recent years, both problems have been studied in terms of exact and approximation algorithms. In addition, the MaxSAT problem has been characterized in terms of Maximal Satisfiable Subsets (MSSes) and Minimal Correction Subsets (MCSes), as well as Minimal Unsatisfiable Subsets (MUSes) and hitting set dualization. However, and in contrast with MaxSAT, no such characterizations exist for MinSAT. This paper addresses this issue by casting the MinSAT problem in a more general framework. The paper studies Maximal Falsifiability (MaxFalse), the problem of computing a subset-maximal set of clauses that can be simultaneously falsified, and shows that MinSAT corresponds to the complement of a largest subset-maximal set of simultaneously falsifiable clauses. Additional contributions of the paper include novel algorithms for Maximum and Maximal Falsifiability, as well as hitting set dualization results for the MaxFalse problem. Moreover, the efficiency of the proposed algorithms is validated on practical instances.
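
The core definition can be illustrated with a brute-force sketch (exponential; the paper's algorithms are far more sophisticated): a clause set is simultaneously falsifiable iff some assignment falsifies every clause in it, and the MinSAT value equals the total number of clauses minus the size of a largest such subset.

```python
# Brute-force MaxFalse illustration over DIMACS-style clauses.
from itertools import combinations, product

def falsifiable_together(clauses, num_vars):
    for bits in product([False, True], repeat=num_vars):
        val = lambda lit: bits[abs(lit) - 1] == (lit > 0)
        if all(not any(val(l) for l in c) for c in clauses):
            return True
    return False

def max_false(clauses, num_vars):
    """Size of a largest simultaneously falsifiable subset."""
    for k in range(len(clauses), 0, -1):
        if any(falsifiable_together(s, num_vars)
               for s in combinations(clauses, k)):
            return k
    return 0

clauses = [[1], [-1], [1, 2]]
print(len(clauses) - max_false(clauses, 2))        # MinSAT value: 1
```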

16:40
On Module-based Abstraction and Repair of Behavioral Programs
SPEAKER: Guy Katz
ABSTRACT.

The number of states a program has tends to grow exponentially in the size of the code. This phenomenon, known as state explosion, hinders the verification and repair of large programs. A key technique for coping with state explosion is using abstractions, where one substitutes a program’s state graph with smaller over-approximations thereof. We show how module-based abstraction-refinement strategies can be applied to the verification of programs written in the recently proposed framework of Behavioral Programming. Further, we demonstrate how — by using a sought-after repair as a means of refining existing abstractions — these techniques can improve the scalability of existing program repair algorithms. Our findings are supported by a proof-of-concept tool.
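
The over-approximation step itself is easy to picture (a schematic sketch, unrelated to the tool's internals): merging concrete states through an abstraction function yields a smaller graph whose behaviors include all concrete ones.

```python
# Quotient of a state graph under an abstraction function alpha:
# every concrete edge induces an abstract edge (an over-approximation).
def quotient(edges, alpha):
    return {(alpha(u), alpha(v)) for u, v in edges}

edges = {(0, 1), (1, 2), (2, 3)}
print(quotient(edges, lambda s: s // 2))   # {(0, 0), (0, 1), (1, 1)}
```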

17:00
Partial Backtracking in CDCL Solvers
SPEAKER: Chuan Jiang
ABSTRACT.

Backtracking is a basic technique of search-based satisfiability (SAT) solvers. In order to backtrack, a SAT solver uses conflict analysis to compute a backtracking level and discards all the variable assignments made between the conflicting level and the backtracking level. We observed that, due to the branching heuristics, the solver may later repeat many of its earlier decisions and propagations. In this paper, we present a new backtracking strategy, which we refer to as partial backtracking. We implemented this strategy in our solver Nigma. Using this strategy, Nigma amends the variable assignments instead of discarding them completely, so that it does not backtrack as many levels as the classic strategy. Our experiments show that Nigma solves 5% more instances than the version without partial backtracking.
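
The contrast with classic backjumping can be sketched as follows (a simplification, not Nigma's code): instead of discarding every assignment above the backtracking level, assignments not involved in the conflict are kept and re-attached.

```python
# Classic vs. partial backtracking on a CDCL trail of (var, value, level).
def classic_backtrack(trail, bt_level):
    return [(v, val, lvl) for v, val, lvl in trail if lvl <= bt_level]

def partial_backtrack(trail, bt_level, conflict_vars):
    kept = []
    for v, val, lvl in trail:
        if lvl <= bt_level:
            kept.append((v, val, lvl))
        elif v not in conflict_vars:
            # amend rather than discard: re-attach at the backtracking level
            # (the actual amendment procedure is more careful than this)
            kept.append((v, val, bt_level))
    return kept
```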

17:20
Acceleration-based Safety Decision Procedure for Programs with Arrays
ABSTRACT.

Reachability analysis of programs with arrays is a well-known, challenging problem, and many existing approaches are inherently incomplete. In this paper we identify a class of programs with arrays for which safety is fully decidable. The completeness result we provide builds on acceleration techniques for arrays and on the decision procedure for the Array Property Fragment.
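
A one-line flavour of acceleration on arrays (a schematic example, not from the paper): the exact effect of iterating a simple copying loop body k times can be written as a single quantified formula in the Array Property Fragment.

```latex
% Loop body: a'[i] = b[i] and i' = i + 1. Its acceleration:
\exists k \geq 1.\;\; i' = i + k \;\wedge\;
\forall j.\; (i \le j < i + k) \rightarrow a'[j] = b[j]
```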