VSL 2014: VIENNA SUMMER OF LOGIC 2014
IJCAR ON TUESDAY, JULY 22ND, 2014

10:45-13:05 Session 149C: FLoC Inter-Conference Topic: SAT/SMT/QBF (joint with CAV)
Location: FH, Hörsaal 1
10:45
Monadic Decomposition
SPEAKER: Margus Veanes

ABSTRACT. Monadic predicates play a prominent role in many decidable cases, including decision procedures for symbolic automata. We are interested in discovering whether a formula can be rewritten into a Boolean combination of monadic predicates. Our setting is quantifier-free formulas over a decidable background theory, such as linear arithmetic, and we develop a semi-decision procedure for extracting a monadic decomposition of a formula when one exists.
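
A small illustration of the notion (not the paper's procedure): over the integers, x >= 0 and y >= 0 and x + y >= 1 is equivalent to the Boolean combination of monadic predicates x >= 0 and y >= 0 and (x >= 1 or y >= 1). The sketch below checks such a candidate decomposition with Z3's Python API; the z3-solver package is an assumption of the example.

```python
# Checking a candidate monadic decomposition for equivalence with the
# original formula, using Z3. Illustration only, not the paper's algorithm.
from z3 import Ints, And, Or, Not, Solver, unsat

x, y = Ints('x y')

# A formula that is not syntactically monadic ...
phi = And(x >= 0, y >= 0, x + y >= 1)

# ... and a candidate decomposition built only from predicates over x alone
# or over y alone.
psi = And(x >= 0, y >= 0, Or(x >= 1, y >= 1))

s = Solver()
s.add(Not(phi == psi))                  # satisfiable iff phi and psi differ somewhere
print("decomposition valid:", s.check() == unsat)   # True
```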

11:05
A DPLL(T) Theory Solver for a Theory of Strings and Regular Expressions

ABSTRACT. An increasing number of applications in verification and security rely on, or could benefit from, automatic solvers that can check the satisfiability of constraints over a rich set of data types that includes character strings. Unfortunately, most string solvers today are standalone tools that can reason only about (some fragment of) the theory of strings and regular expressions, sometimes with strong restrictions on the expressiveness of their input language, such as length bounds on all string variables. These solvers are based on reductions to satisfiability problems over other data types, such as bit vectors, or to decision problems over automata. In this paper, we present a set of algebraic techniques for solving constraints over the theory of (unbounded) strings natively, without reduction to other problems. Our techniques can be used to construct string theory solvers that can be integrated into general, multi-theory SMT solvers based on the DPLL(T) architecture. We have implemented these techniques in our SMT solver CVC4 to expand its set of built-in data types to strings with concatenation, length, and membership in regular languages. This implementation makes CVC4 the first solver able to accept a rich set of mixed constraints over strings, integers, reals, arrays, and algebraic data types. Our initial experimental results show that, in addition, CVC4 is, in a number of ways, highly competitive with specialized string solvers on pure string problems.
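
The sketch below illustrates the kind of mixed string/integer constraints the paper targets: concatenation, membership in a regular language, and string length tied to integer arithmetic. For concreteness it uses Z3's Python string API (z3-solver assumed installed); CVC4 accepts analogous constraints in SMT-LIB 2 syntax.

```python
# Mixed string/integer constraints, written with Z3's Python API purely
# for illustration of the constraint kinds discussed in the abstract.
from z3 import String, StringVal, Int, Length, Concat, InRe, Re, Star, Solver, sat

s, t = String('s'), String('t')
n = Int('n')

solver = Solver()
solver.add(s == Concat(t, StringVal("!")))   # string concatenation
solver.add(InRe(t, Star(Re("ab"))))          # membership in the regular language (ab)*
solver.add(Length(t) == n, n > 3)            # string length mixed with integer arithmetic

if solver.check() == sat:
    print(solver.model())                    # e.g. t = "abab", s = "abab!", n = 4
```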

11:25
Bit-Vector Rewriting with Automatic Rule Generation

ABSTRACT. Rewriting is essential for efficient bit-vector SMT solving. The rewriting algorithm commonly used by modern SMT solvers iteratively applies a set of ad hoc rewriting rules hard-coded into the solver to simplify the given formula at the preprocessing stage. This paper proposes a new approach to rewriting. In our approach, the solver starts each invocation with an empty set of rewriting rules. The set is extended by applying at run-time an automatic SAT-based algorithm for new rewriting rule generation. The set of rules differs from instance to instance. We implemented our approach in the framework of an algorithm for equivalence and constant propagation, called 0-saturation, which we extended from purely propositional reasoning to bit-vector reasoning. Our approach results in a substantial performance improvement in a state-of-the-art SMT solver over various SMT-LIB families.
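
One plausible reading of the SAT-based rule check, sketched under the assumption that Z3's Python API is used as the back end: a candidate rewrite rule lhs -> rhs is sound only if lhs = rhs has no counterexample over all bit-vector values. This is not the paper's 0-saturation algorithm, only the kind of validity check that automatic rule generation relies on.

```python
# Keep a candidate bit-vector rewrite rule only if it is valid, i.e. the
# solver finds no counterexample to lhs == rhs. Illustration only.
from z3 import BitVec, Solver, Not, unsat

def rule_is_valid(lhs, rhs):
    """Return True iff lhs == rhs holds for every bit-vector assignment."""
    s = Solver()
    s.add(Not(lhs == rhs))
    return s.check() == unsat

x = BitVec('x', 8)
y = BitVec('y', 8)

candidates = [
    (x ^ x, x - x),        # both sides are 0: valid rule
    (x & (x | y), x),      # absorption: valid rule
    ((x + y) >> 1, x),     # not valid in general: rejected
]
for lhs, rhs in candidates:
    print(lhs, "->", rhs, ":", "keep" if rule_is_valid(lhs, rhs) else "discard")
```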

11:45
A Tale of Two Solvers: Eager and Lazy Approaches to Bit-vectors

ABSTRACT. The standard method for deciding bit-vector constraints is via eager reduction to propositional logic. This is usually done after first applying powerful rewrite techniques. While often efficient in practice, this method does not scale on problems for which top-level rewrites cannot reduce the problem size sufficiently. A lazy solver can target such problems by doing many satisfiability checks, each of which only reasons about a small subset of the problem. In addition, the lazy approach enables a wide range of optimization techniques that are not available to the eager approach. In this paper we describe the architecture and features of our lazy solver. We provide a comparative analysis of the eager and lazy approaches, and show how they are complementary in terms of the types of problems they can efficiently solve. For this reason, we propose a portfolio approach that runs a lazy and an eager solver in parallel. Our empirical evaluation shows that the lazy solver can solve problems that none of the eager solvers can, and that the portfolio solver outperforms other solvers both in the total number of problems solved and in the time taken to solve them.
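
A minimal portfolio sketch along the lines proposed above, assuming two solver executables are available on PATH; the command names "eager-solver" and "lazy-solver" are placeholders, not the tools from the paper. Both are launched on the same benchmark and the first answer wins.

```python
# Run an eager and a lazy solver in parallel; take whichever finishes first.
import subprocess
from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

def run(cmd, benchmark):
    out = subprocess.run(cmd + [benchmark], capture_output=True, text=True)
    return cmd[0], out.stdout.strip()

def portfolio(benchmark):
    with ThreadPoolExecutor(max_workers=2) as pool:
        futures = [
            pool.submit(run, ["eager-solver"], benchmark),   # placeholder command
            pool.submit(run, ["lazy-solver"], benchmark),    # placeholder command
        ]
        done, pending = wait(futures, return_when=FIRST_COMPLETED)
        for f in pending:
            f.cancel()     # best effort; a real portfolio would also kill the process
        return next(iter(done)).result()

if __name__ == "__main__":
    print(portfolio("benchmark.smt2"))
```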

12:05
AVATAR: The New Architecture for First-Order Theorem Provers

ABSTRACT. This paper describes a new architecture for first-order resolution and superposition theorem provers called AVATAR (Advanced Vampire Architecture for Theories and Resolution). Its original motivation comes from a problem well studied in the past: dealing with problems whose clauses contain propositional variables, or clauses that can be split into components with disjoint sets of variables. Such clauses are common in problems coming from applications, for example in verification and program analysis, where many ground literals occur in the problems and even more are generated during proof search.

This problem was previously addressed by adding various versions of splitting. The addition of splitting resulted in considerable improvements in the performance of theorem provers. However, even with various versions of splitting implemented, the performance of superposition theorem provers is nowhere near that of SMT solvers on purely ground problems or SAT solvers on propositional problems.

This paper describes a new, revolutionary architecture for superposition theorem provers, in which a superposition theorem prover is tightly integrated with a SAT or an SMT solver. Its implementation in our theorem prover Vampire resulted in drastic improvements over all previous implementations of splitting. Over four hundred TPTP problems previously unsolvable by any modern prover, including Vampire itself, have been proved, most of them with short runtimes. Many problems solved with one of the 481 variants of splitting previously implemented in Vampire can also be solved with AVATAR.

We also believe that AVATAR is ideally suited for applications where reasoning with both quantifiers and theories is required.
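
A minimal sketch of the splitting step that AVATAR builds on, under simplified assumptions: a clause is a list of literals, each carrying the set of variables it mentions, and literals are grouped into components with pairwise disjoint variable sets via union-find. In AVATAR each component is then represented by a propositional name whose truth value is managed by the SAT/SMT solver; that part is only hinted at in the comments.

```python
# Split a clause into variable-disjoint components (the precondition for
# AVATAR-style splitting). Ground literals form singleton components.

def split_clause(literals):
    """literals: list of (literal_text, frozenset_of_variables)."""
    parent = {}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v

    def union(a, b):
        parent[find(a)] = find(b)

    for _, vs in literals:
        for v in vs:
            parent.setdefault(v, v)
    for _, vs in literals:
        vs = list(vs)
        for v in vs[1:]:
            union(vs[0], v)

    components = {}
    for lit, vs in literals:
        key = find(next(iter(vs))) if vs else ("ground", lit)
        components.setdefault(key, []).append(lit)
    # In AVATAR, each component would now be named by a propositional
    # variable handed to the SAT/SMT solver.
    return list(components.values())

# p(X) | q(X,Y) | r(Z) | s  splits into  {p(X), q(X,Y)}, {r(Z)}, {s}
clause = [("p(X)", frozenset({"X"})),
          ("q(X,Y)", frozenset({"X", "Y"})),
          ("r(Z)", frozenset({"Z"})),
          ("s", frozenset())]
print(split_clause(clause))
```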

12:25
Automating Separation Logic with Trees and Data
SPEAKER: unknown

ABSTRACT. Separation logic (SL) is a widely used formalism for verifying heap manipulating programs. Existing SL solvers focus on decidable fragments for list-like structures. More complex data structures such as trees are typically unsupported in implementations, or handled by incomplete heuristics. While complete decision procedures for reasoning about trees have been proposed, these procedures suffer from high complexity, or make global assumptions about the heap that contradict the separation logic philosophy of local reasoning. In this paper, we present a fragment of classical first-order logic for local reasoning about tree-like data structures. The logic is decidable in NP and the decision procedure allows for combinations with other decidable first-order theories for reasoning about data. Such extensions are essential for proving functional correctness properties. We have implemented our decision procedure and, building on earlier work on translating SL proof obligations into classical logic, integrated it into an SL-based verification tool. We successfully used the tool to verify functional correctness of tree-based data structure implementations.

12:45
A Nonlinear Real Arithmetic Fragment
SPEAKER: Ashish Tiwari

ABSTRACT. We present a new procedure for testing satisfiability (over the reals) of a conjunction of polynomial equations. There are three possible return values for our procedure: it either returns a model for the input formula, or it says that the input is unsatisfiable, or it fails because the applicability condition for the procedure, called the eigen-condition, is violated. For the class of constraints where the eigen-condition holds, our procedure is a decision procedure. The design of the procedure is motivated by the problems arising from exists-forall problems -- generated by template-based verification and synthesis methods -- that are translated into exists problems using the Positivstellensatz. Our procedure can be seen as a generalized SAT solver: in particular, the eigen-condition holds on nonlinear real arithmetic encodings of SAT problems. We experimentally evaluate the procedure and describe transformations that can potentially turn a problem into another one where the eigen-condition holds.
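
The "generalized SAT solver" remark refers to the standard encoding of propositional satisfiability as a conjunction of polynomial equations: v*(v-1) = 0 forces each variable into {0,1}, and each clause contributes one equation excluding its falsifying assignment. The sketch below shows that encoding on a three-clause CNF and, purely for illustration, checks it with Z3 (z3-solver assumed installed) rather than with the paper's eigen-condition procedure.

```python
# Nonlinear real arithmetic encoding of the CNF (a | ~b) & (b | c) & (~a | ~c).
from z3 import Reals, Solver, sat

a, b, c = Reals('a b c')
s = Solver()

# Boolean-ness: every variable is 0 or 1.
for v in (a, b, c):
    s.add(v * (v - 1) == 0)

# One equation per clause, ruling out the falsifying assignment.
s.add((1 - a) * b == 0)            # (a | ~b)  falsified iff a=0, b=1
s.add((1 - b) * (1 - c) == 0)      # (b | c)   falsified iff b=0, c=0
s.add(a * c == 0)                  # (~a | ~c) falsified iff a=1, c=1

if s.check() == sat:
    print(s.model())               # e.g. a=1, b=1, c=0
```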

12:55
Yices 2.2

ABSTRACT. Yices is an efficient SMT solver developed by SRI International. The first version of Yices was released in 2006 and has been continuously updated since then. In 2007, we started a complete re-implementation of the Yices solver intended to achieve greater performance, modularity, and flexibility. We describe the latest release of Yices, namely Yices 2.2. We present the tool's architecture, the algorithms it implements, and the latest developments, such as support for the SMT2 notation and various performance improvements. Like its predecessors, Yices 2.2 is distributed free of charge for non-commercial use.
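
A small end-to-end sketch of driving an SMT-LIB 2 solver such as Yices 2.2 from a script. The binary name yices-smt2 and its availability on PATH are assumptions of this example, as is its acceptance of the options shown.

```python
# Write a small SMT-LIB 2 problem to a file and hand it to the solver.
import os, subprocess, tempfile

SMT2 = """(set-option :produce-models true)
(set-logic QF_LIA)
(declare-fun x () Int)
(declare-fun y () Int)
(assert (and (> (+ x y) 5) (< x 2)))
(check-sat)
(get-model)
"""

with tempfile.NamedTemporaryFile("w", suffix=".smt2", delete=False) as f:
    f.write(SMT2)
    path = f.name

result = subprocess.run(["yices-smt2", path], capture_output=True, text=True)
print(result.stdout)    # expected: "sat" followed by a model
os.remove(path)
```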

10:45-13:00 Session 149E: Invited talk & Complexity
Location: FH, Hörsaal 5
10:45
Structured Search and Learning
11:45
The Complexity of Theorem Proving in Circumscription and Minimal Entailment
SPEAKER: unknown

ABSTRACT. We provide the first comprehensive proof-complexity analysis of different proof systems for propositional circumscription. In particular, we investigate two sequent-style calculi: MLK defined by Olivetti (J. Autom. Reasoning, 1992) and CIRC introduced by Bonatti and Olivetti (ACM ToCL, 2002), and the tableaux calculus NTAB suggested by Niemelä (TABLEAUX, 1996).

In our analysis we obtain exponential lower bounds for the proof size in NTAB and CIRC and show a polynomial simulation of CIRC by MLK. This yields a chain NTAB < CIRC < MLK of proof systems for circumscription of strictly increasing strength with respect to lengths of proofs.

12:15
Visibly Linear Temporal Logic
SPEAKER: unknown

ABSTRACT. We introduce a robust and tractable temporal logic, which we call Visibly Linear Temporal Logic (VLTL), that captures the full class of Visibly Pushdown Languages. The novel logic avoids fixpoints and instead provides natural temporal operators with simple and intuitive semantics. We prove that the complexities of the satisfiability and pushdown model checking problems are the same as for other well-known logics, such as CARET and the nested word temporal logic NWTL, which in contrast are strictly less expressive than VLTL. Moreover, formulas of CARET and NWTL can be easily and inductively translated into VLTL in linear time.

14:30-16:00 Session 151E: Description Logic
Location: FH, Hörsaal 5
14:30
Count and Forget: Uniform Interpolation of SHQ-Ontologies
SPEAKER: unknown

ABSTRACT. We propose a method for forgetting concept symbols and non-transitive role symbols of SHQ-ontologies, or equivalently for computing uniform interpolants in SHQ. A uniform interpolant of an ontology over a specified set of symbols is a new ontology that uses only the specified symbols and preserves all logical entailments that can be expressed in SHQ using these symbols. Uniform interpolation has applications in ontology reuse, information hiding and ontology analysis, but so far no method for computing uniform interpolants for expressive description logics with number restrictions has been developed. Our results are interesting not only because they make it possible to compute uniform interpolants of ontologies in a more expressive language: using number restrictions also makes it possible to preserve more information in uniform interpolants of ontologies in less complex logics, such as ALC or EL. The presented method computes uniform interpolants on the basis of a new resolution calculus for SHQ-ontologies that derives consequences in a goal-oriented manner. The output of our method is expressed in SHQmu, which is SHQ extended with fixpoint operators, in order to always enable a finite representation of the uniform interpolant.

15:00
Coupling Tableau Algorithms for Expressive Description Logics with Completion-Based Saturation Procedures

ABSTRACT. Nowadays, saturation-based reasoners for the OWL EL profile are able to handle large ontologies such as SNOMED very efficiently. However, saturation-based reasoning procedures become incomplete if the ontology is extended with axioms that use features of more expressive Description Logics, e.g., disjunctions. Tableau-based procedures, on the other hand, are not limited to a specific OWL profile, but even highly optimised reasoners might not be efficient enough to handle large ontologies such as SNOMED. In this paper, we present an approach for tightly coupling tableau- and saturation-based procedures that we implement in the OWL DL reasoner Konclude. Our detailed evaluation shows that this combination significantly improves the reasoning performance on a wide range of ontologies.

15:30
EL-ifying Ontologies
SPEAKER: unknown

ABSTRACT. The OWL 2 EL profile is a fragment of the ontology language OWL 2 for which standard reasoning tasks are feasible in polynomial time. Many OWL ontologies, however, fall outside the EL profile and hence profile-specific reasoners are not applicable. Such ontologies, however, typically contain a small number of axioms outside EL, which may have little or no influence on reasoning outcomes. In this paper, we investigate techniques for rewriting non-EL axioms into EL that preserve the outcomes of classification and instance queries and which are applicable to the DL SHOIQ. Furthermore, for Horn ontologies, we devise a technique inspired by EL-reasoning that can be used to reduce the size of pre-models in (hyper)tableau reasoning. We have implemented and extensively tested our techniques with very encouraging results.

16:30-18:10 Session 153B: Knowledge Representation & Reasoning
Location: FH, Hörsaal 5
16:30
The Bayesian Description Logic BEL

ABSTRACT. We introduce the probabilistic Description Logic BEL. In BEL, axioms are required to hold only in an associated context. The probabilistic component of the logic is given by a Bayesian network (BN) that describes the joint probability distribution of the contexts. We study the main reasoning problems in this logic; in particular, we (i) prove that deciding positive and almost-sure entailments is not harder for BEL than for the BN, and (ii) show how to compute the probability of a consequence and its most likely context.
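
A toy illustration of the probabilistic side described above: the probability of a consequence is the total mass of the contexts (assignments to the BN variables) in which the contextualised ontology entails it. The joint distribution and the per-context entailment answers below are invented for the example; a real system would obtain the latter from a description-logic reasoner.

```python
# Probability of a consequence as the mass of the contexts that entail it.

# Joint probability of two BN variables (x, y) -- made-up numbers.
joint = {
    (True,  True):  0.28,
    (True,  False): 0.42,
    (False, True):  0.12,
    (False, False): 0.18,
}

# Which contexts entail the consequence "A subsumed-by B" -- stubbed answers.
entails = {
    (True,  True):  True,
    (True,  False): True,
    (False, True):  False,
    (False, False): False,
}

p = sum(prob for ctx, prob in joint.items() if entails[ctx])
print("P(A subsumed-by B) =", p)                      # 0.70
print("most likely entailing context:",
      max((ctx for ctx in joint if entails[ctx]), key=joint.get))
```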

17:00
Otter proofs of theorems in Tarskian geometry
SPEAKER: unknown

ABSTRACT. We report on a project to use Otter to find proofs of the theorems in Tarskian geometry proved in Szmielew's part (Part I) of the book by Schwabhäuser, Szmielew, and Tarski. These theorems start with fundamental properties of betweenness, and end with the development of geometric definitions of addition and multiplication that permit the representation of models of geometry as planes over Euclidean fields, or over real-closed fields in the case of full continuity. They include the four challenge problems left unsolved by Quaife, who two decades ago found some Otter proofs in Tarskian geometry. Our methodology was to present each theorem to Otter as an isolated problem, giving it the (negated) theorem as a goal, and the previous theorems (and the axioms) to use. In practice, we had to coax Otter a bit more.

Quaife's four challenge problems were: every line segment has a midpoint; every segment is the base of some isosceles triangle; the outer Pasch axiom (assuming inner Pasch as an axiom); and the first outer connectivity property of betweenness. These are to be proved without any parallel axiom and without even line-circle continuity. These are difficult theorems, the first proofs of which were the heart of Gupta's Ph.D. thesis under Tarski. Otter proved them all in 2012. Our success, we argue, is due to improvements in techniques of automated deduction, rather than to increases in computer speed and memory.

The theory of Hilbert (1899) can be translated into Tarski's language, interpreting lines as pairs of distinct points, and angles as ordered triples of non-collinear points. Under this interpretation, the axioms of Hilbert either occur among, or are easily deduced from, theorems in the first 11 (of 16) chapters of Szmielew. At the time of writing this abstract, we have proved all but the two axioms about angles, III.3 and III.4, and we will finish that soon. Narboux and Braun have recently checked these same proofs in Coq.

17:30
NESCOND: an Implementation of Nested Sequent Calculi for Conditional Logics
SPEAKER: unknown

ABSTRACT. In this paper we present NESCOND, a theorem prover for normal conditional logics. NESCOND implements some recently introduced NESted sequent calculi for propositional CONDitional logics CK and some of its significant extensions with axioms ID, MP and CEM. NESCOND also deals with the flat fragment (i.e., without nested conditionals) of the conditional logic CK+CSO+ID, which corresponds to the cumulative logic C introduced by Kraus, Lehmann and Magidor. NESCOND is inspired by the methodology of leanTAP and is implemented in Prolog. The paper also reports experimental results, which show that the performance of NESCOND is promising. NESCOND is available at http://www.di.unito.it/~pozzato/nescond/

17:50
Knowledge Engineering for Large Ontologies with Sigma KEE 3.0
SPEAKER: unknown

ABSTRACT. The Suggested Upper Merged Ontology (SUMO) is a large, comprehensive ontology stated in higher-order logic. It has co-evolved with a development environment called the Sigma Knowledge Engineering Environment (SigmaKEE). SUMO has been applied to commercial applications in concept extraction and metadata verification. A large and important subset of SUMO can be expressed in first-order logic with equality. To support the knowledge engineer, SigmaKEE has, over time, integrated different reasoning systems, either via special modifications to the reasoner or in an ad hoc manner that requires re-processing of the full knowledge base for even trivial queries.

To overcome this problem, to create a simpler system configuration that is easier for users to install and manage, and to integrate a state-of-the-art theorem prover, we have now integrated Sigma with the E theorem prover. The E distribution includes a simple server version that loads and indexes the full knowledge base and supports interactive queries via a simple interface based on text streams. No special modifications to E were necessary for the integration, so SigmaKEE has an easy upgrade path to future versions.

08:45-10:15 Session 156: VSL Keynote Talk
Location: EI, EI 7 + EI 9, EI 10 + FH, Hörsaal 1
08:45
VSL Keynote Talk: Verification of Computer Systems with Model Checking
SPEAKER: Edmund Clarke

ABSTRACT. Model Checking is an automatic verification technique for large state transition systems. The technique has been used successfully to debug complex computer hardware and communication protocols. Now, it is beginning to be used for complex hybrid (continuous/discrete) systems as well. The major disadvantage of the technique is a phenomenon called the State Explosion Problem. This problem is impossible to avoid in the worst case. However, by using sophisticated data structures and clever search algorithms, it is now possible to verify hybrid systems with astronomical numbers of states.
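
As a minimal illustration of the underlying idea, the sketch below performs an explicit-state reachability check for a safety property, the simplest form of model checking; real tools replace this naive search with symbolic data structures (e.g. BDDs) and search heuristics to cope with the state explosion mentioned in the abstract.

```python
# Explicit-state safety checking: explore the transition system from the
# initial states and report whether a "bad" state is reachable.
from collections import deque

def reachable_bad_state(initial, successors, is_bad):
    """Breadth-first search over an explicitly enumerated state space."""
    seen = set(initial)
    queue = deque(initial)
    while queue:
        state = queue.popleft()
        if is_bad(state):
            return True, state
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False, None

# Toy system: a counter modulo 8 that may also reset to 0; "bad" means value 6.
succ = lambda s: {(s + 1) % 8, 0}
print(reachable_bad_state({0}, succ, lambda s: s == 6))   # (True, 6)
```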

10:15-10:45 Coffee Break
13:00-14:30 Lunch Break
16:00-16:30 Coffee Break