19TH INTERNATIONAL CONFERENCE ON LOGIC FOR PROGRAMMING, ARTIFICIAL INTELLIGENCE AND REASONING

LPAR Accepted Papers with Abstracts

Guy Katz (Weizmann Institute of Science)
On Module-Based Abstraction and Repair of Behavioral Programs

ABSTRACT. The number of states a program has tends to grow exponentially in the size of the code. This phenomenon, known as state explosion, hinders the verification and repair of large programs. A key technique for coping with state explosion is using abstractions, where one substitutes a program’s state graph with smaller over-approximations thereof. We show how module-based abstraction-refinement strategies can be applied to the verification of programs written in the recently proposed framework of Behavioral Programming. Further, we demonstrate how — by using a sought-after repair as a means of refining existing abstractions — these techniques can improve the scalability of existing program repair algorithms. Our findings are supported by a proof-of-concept tool.

František Blahoudek (Faculty of Informatics, Masaryk University, Brno)
Jan Strejcek (Faculty of Informatics, Masaryk University, Brno)
Mojmir Kretinsky (Faculty of Informatics, Masaryk University, Brno)
Comparison of LTL to Deterministic Rabin Automata Translators

ABSTRACT. Increasing interest in control synthesis and probabilistic model checking has spurred recent development of LTL to deterministic omega-automata translation. The standard approach, represented by the ltl2dstar tool, employs Safra's construction to determinize a Büchi automaton produced by some LTL to Büchi automata translator. Since 2012, three new translators have appeared, namely Rabinizer, LTL3DRA, and Rabinizer 2. They all avoid Safra's construction and work on LTL fragments only. We compare the performance and the automata produced by the mentioned tools, where ltl2dstar is combined with several LTL to Büchi automata translators: besides the traditionally used LTL2BA, we also consider SPOT and LTL3BA.

Stephan Schulz (TU München)
System Description: E 1.8

ABSTRACT. E is a theorem prover for full first-order logic with equality. It reduces first-order problems to clause normal form and employs a saturation algorithm based on the equational superposition calculus. E is built on shared terms with cached rewriting, and employs several innovations for efficient clause indexing. Major strengths of the system are automatic problem analysis and highly flexible search heuristics. The prover can provide verifiable proof objects and answer substitutions with very little overhead. E performs well, solving more than 69% of TPTP-5.4.0 FOF and CNF problems in automatic mode.

Tero Laitinen (Aalto University)
Tommi Junttila (Aalto University School of Science and Technology)
Ilkka Niemelä (Aalto University)
Simulating Parity Reasoning

ABSTRACT. Propositional satisfiability (SAT) solvers, which typically operate on conjunctive normal form (CNF), have been successfully applied in many domains. However, instances of circuit verification, bounded model checking, and logical cryptanalysis can have many parity (xor) constraints, which may not be handled efficiently if translated to CNF. Thus, extensions of CNF-driven search with various stronger parity reasoning engines, ranging from equivalence reasoning to incremental Gaussian elimination, have been proposed. This paper studies how stronger parity reasoning techniques in the DPLL(XOR) framework can be simulated by simpler systems: resolution, unit propagation, and parity explanations.
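
To see why plain CNF struggles with parity constraints, here is the standard auxiliary-variable-free translation (a folklore construction, sketched in Python purely for illustration; it is not code from the paper): a single xor constraint over n variables requires 2^(n-1) clauses.

from itertools import product

def xor_to_cnf(variables):
    # Encode x1 xor ... xor xn = 1 over DIMACS-style variable ids:
    # one clause ruling out each even-parity (falsifying) assignment.
    clauses = []
    for signs in product([True, False], repeat=len(variables)):
        if sum(signs) % 2 == 0:  # even parity falsifies the xor
            clauses.append([-v if s else v for v, s in zip(variables, signs)])
    return clauses

print(len(xor_to_cnf([1, 2, 3])))           # 4 clauses
print(len(xor_to_cnf(list(range(1, 11)))))  # 512 clauses for n = 10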

Gabriel Scherer (INRIA)
Jan Hoffmann (Yale University)
Tracking Data-Flow with Open Closure Types

ABSTRACT. Type systems hide data that is captured by function closures in function types. In most cases this is a beneficial design that enables simplicity and compositionality. However, some applications require explicit information about the data that is captured in closures. This paper introduces open closure types, that is, function types that are decorated with type contexts. They are used to track data-flow from the environment into the function closure. A simply-typed lambda calculus is used to study the properties of the type theory of open closure types. A distinctive feature of this type theory is that an open closure type of a function can vary in different type contexts. To present an application of the type theory, it is shown that a type derivation establishes a simple non-interference property in the sense of information-flow theory. A publicly available prototype implementation of the system can be used to experiment with type derivations for example programs.

George Metcalfe (University of Bern)
Petr Cintula (Institute of Computer Science, Academy of Sciences of the Czech Republic)
Herbrand Theorems for Substructural Logics

ABSTRACT. Herbrand and Skolemization theorems are obtained for a broad family of first-order substructural logics. These logics typically lack equivalent prenex forms, a deduction theorem, and reductions of semantic consequence to satisfiability. The Herbrand and Skolemization theorems therefore take various forms, applying either to the left or right of the consequence relation, and to restricted classes of formulas.

Christoph Benzmüller (Freie Universität Berlin)
Thomas Raths (University of Potsdam)
HOL based First-order Modal Logic Provers

ABSTRACT. First-order modal logics (FMLs) can be modeled as natural fragments of classical higher-order logic (HOL). The FMLtoHOL tool exploits this fact and enables the application of off-the-shelf HOL provers and model finders for reasoning within FMLs. The tool bridges between the qmf-syntax for FML and the TPTP thf-syntax for HOL. It currently supports the logics K, K4, D, D4, T, S4, and S5 with respect to constant, varying and cumulative domain semantics. The approach is evaluated in combination with a meta-prover for HOL which sequentially schedules various HOL reasoners. The resulting system is very competitive.
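
For orientation, the shallow embedding that underlies this line of work is well known (the sketch below is the textbook version; the exact FMLtoHOL encoding may differ in details): FML formulas are lifted to HOL predicates over worlds, with the modalities quantifying over an accessibility relation R.

\[
\Box \varphi := \lambda w.\, \forall v.\, R\,w\,v \rightarrow \varphi\,v
\qquad
\Diamond \varphi := \lambda w.\, \exists v.\, R\,w\,v \wedge \varphi\,v
\]

Validity of a formula is then truth in all worlds, and constraints on R (reflexivity, transitivity, etc.) select the particular logic among K, K4, D, D4, T, S4 and S5.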

Gudmund Grov (Heriot-Watt University)
Aleks Kissinger (Oxford University)
Yuhui Lin (Heriot-Watt University)
A Graphical Language for Proof Strategies

ABSTRACT. Complex automated proof strategies are often difficult to extract, visualise, modify, and debug. Traditional tactic languages, often based on stack-based goal propagation, make it easy to write proofs that obscure the flow of goals between tactics and are fragile to minor changes in the input, the proof structure, or the tactics themselves. Here, we address this by introducing a graphical language called PSGraph for writing proof strategies. Strategies are constructed visually by "wiring together" collections of tactics and evaluated by propagating goal nodes through the diagram via graph rewriting. Tactic nodes can have many output wires, and use a filtering procedure based on goal-types (predicates describing the features of a goal) to decide where best to send newly generated sub-goals. In addition to making the flow of goal information explicit, the graphical language can fulfil the role of many tacticals using visual idioms like branching, merging, and feedback loops. We argue that this language enables the development of more robust proof strategies and provide several examples, along with a prototype implementation in Isabelle.

Etienne Renault (UPMC)
Alexandre Duret-Lutz (LRDE/EPITA)
Fabrice Kordon (LIP6/UPMC)
Denis Poitrenaud (Université P. et M. Curie)
Three SCC-based Emptiness Checks for Generalized Büchi Automata

ABSTRACT. The automata-theoretic approach for the verification of linear time properties involves checking the emptiness of a Büchi automaton. However, generalized Büchi automata, with multiple acceptance sets, are preferred when verifying under weak fairness hypotheses. Existing emptiness checks whose complexity is independent of the number of acceptance sets are all based on the enumeration of Strongly Connected Components (SCCs).

In this paper, we review the state-of-the-art SCC enumeration algorithms to study how they can be turned into emptiness checks. This leads us to define two new emptiness check algorithms, introduce new optimizations, and show that one of these can benefit a classic SCC enumeration algorithm. We have implemented all these variants to compare their relative performances and the overhead induced by the emptiness check compared to the corresponding SCC enumeration algorithm.

Giovanni Casini (CAIR (CSIR and UKZN))
Umberto Straccia (ISTI-CNR)
Towards Rational Closure for Fuzzy Logic: The Case of Propositional Gödel Logic

ABSTRACT. In the field of non-monotonic logics, the notion of rational closure is acknowledged as a landmark, and we investigate whether such a construction can be adopted in the context of mathematical fuzzy logic, a so far (apparently) unexplored journey. As a first step, we characterise rational closure in the context of Propositional Gödel Logic.

David Harel (Weizmann Institute of Science)
Amir Kantor (Weizmann Institute of Science, Rehovot, Israel)
Guy Katz (Weizmann Institute of Science)
Relaxing Synchronization Constraints in Behavioral Programs

ABSTRACT. In behavioral programming, a program consists of separate modules called behavior threads, each representing a part of the system’s allowed, necessary or forbidden behavior. An execution of the program is a series of synchronizations between these threads, where at each synchronization point an event is selected to be carried out. As a result, the execution speed is dictated by the slowest thread. We propose an eager execution mechanism for such programs, which builds upon the realization that it is often possible to predict the outcome of a synchronization point even without waiting for slower threads to synchronize. This allows faster threads to continue running uninterrupted, whereas slower ones catch up at a later time. Consequently, eager execution brings about increased system performance, better support for the modular design of programs, and the ability to distribute programs across several machines. It also makes it possible to apply behavioral programming to a variety of problems that were previously outside its scope. We illustrate the method with concrete examples, implemented in a behavioral programming framework in C++.

Bernhard Heinemann (University of Hagen)
Characterizing Subset Spaces as Bi-Topological Structures

ABSTRACT. Subset spaces constitute a relatively new semantics for bi-modal logic. This semantics admits, in particular, a modern, computer science-oriented view of the classic interpretation of the basic modalities in topological spaces à la McKinsey and Tarski. In this paper, we look at the relationship between the two semantics from the opposite perspective, as it were, by asking for a consideration of subset spaces in terms of topology and topological modal logic, respectively. Indeed, we shall finally obtain a corresponding characterization result. A third semantics of modal logic, namely the standard relational one, and the associated first-order structures, will play an important part in doing so as well.

Riku Nortje (CAIR)
Arina Britz (Meraka Institute, CSIR)
Thomas Meyer (Centre for Artificial Intelligence Research, UKZN and CSIR Meraka)
Reachability modules for the Description Logic SRIQ

ABSTRACT. In this paper we investigate module extraction for the Description Logic SRIQ. We formulate modules in terms of the reachability problem for directed hypergraphs. Using inseparability relations, we investigate the module-theoretic properties of reachability modules and show by means of an empirical evaluation that these modules have the potential of being substantially smaller than syntactic locality modules.

Luca Aceto (Reykjavik University)
Dario Della Monica (ICE-TCS, School of Computer Science, Reykjavik University)
Anna Ingolfsdottir (Reykjavik University)
Angelo Montanari (University of Udine)
Guido Sciavicco (University of Murcia)
An Algorithm for Enumerating Maximal Models of Horn Theories with an Application to Modal Logics

ABSTRACT. The fragment of propositional logic known as Horn theories plays a central role in automated reasoning. The problem of enumerating the maximal models of a Horn theory (MaxMod) has been proved to be computationally hard, unless P = NP. To the best of our knowledge, the only algorithm available for it is one based on a brute-force approach. In this paper, we provide an algorithm for the problem of enumerating the maximal subsets of facts that do not entail a distinguished atomic proposition in a positive Horn theory (MaxNoEntail). We show that MaxMod is polynomially reducible to MaxNoEntail (and vice versa), making it possible to also solve the former problem with the proposed algorithm. Addressing MaxMod via MaxNoEntail opens, inter alia, the possibility of benefiting from the monotonicity of the notion of entailment. (The notion of model does not enjoy such a property.) We also discuss an application of MaxNoEntail to expressiveness issues for modal logics, which reveals the effectiveness of the proposed algorithm.
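
To make MaxMod concrete, the following minimal brute-force enumerator (our illustration of the baseline approach mentioned above, not the paper's algorithm) lists the maximal models of a theory over variables 1..n, where a clause is a list of integer literals.

from itertools import combinations

def satisfies(model, theory):
    return all(any((l > 0 and l in model) or (l < 0 and -l not in model)
                   for l in clause)
               for clause in theory)

def maximal_models(theory, n):
    variables = range(1, n + 1)
    models = [frozenset(m) for k in range(n + 1)
              for m in combinations(variables, k)
              if satisfies(set(m), theory)]
    # keep only models with no strict superset among the models
    return [m for m in models if not any(m < other for other in models)]

# Horn theory: p1 -> p2, and (p2 and p3) -> false
theory = [[-1, 2], [-2, -3]]
for m in maximal_models(theory, 3):
    print(sorted(m))   # prints [1, 2] and [3]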

Filipe Casal (Instituto Superior Técnico, University of Lisbon)
João Rasga (Instituto Superior Técnico, University of Lisbon)
Revisiting the equivalence of shininess and politeness

ABSTRACT. The Nelson-Oppen method [1] allows the modular combination of quantifier-free satisfiability procedures for first-order theories into a quantifier-free satisfiability procedure for the union of the theories. However, this method requires the theories to have disjoint signatures and to be stably infinite. Due to the importance of the result, several attempts to extend the method to different and wider classes of theories were made. Recently, two different extensions of the Nelson-Oppen method were proposed, in which the stable infiniteness requirement was replaced by another condition: in [2] it was required that all but one of the theories be shiny, and in [3] it was required that, when combining two theories, one of them be polite. The relationship between shiny and polite theories was analysed in [3]. Later, a stronger notion of polite theory was proposed, see [4], in order to overcome a subtle issue with the proof of the Nelson-Oppen method in [3]. In this paper, we analyse the relationship between shiny and strongly polite theories in the one-sorted case. We show that a shiny theory with a decidable quantifier-free satisfiability problem is strongly polite, and we provide two different sufficient conditions for a strongly polite theory to be shiny. Based on these results, we derive a combination method for the union of a polite theory with an arbitrary theory.

[1] G. Nelson and D. C. Oppen. Simplification by cooperating decision procedures. ACM Transactions on Programming Languages and Systems, 1(2):245–257, 1979.

[2] C. Tinelli and C. G. Zarba. Combining nonstably infinite theories. Journal of Automated Reasoning, 34(3):209–238, 2005.

[3] S. Ranise, C. Ringeissen, and C. G. Zarba. Combining data structures with nonstably infinite theories using many-sorted logic. In Proceedings of the Fifth International Workshop on Frontiers of Combining Systems (FroCoS’2005), volume 3717 of LNAI, pages 48–64, 2005.

[4] D. Jovanovic and C. Barrett. Polite theories revisited. In Proceedings of the Seventeenth International Conference on Logic for Programming, Artificial Intelligence, and Reasoning (LPAR’2010), volume 6397 of LNCS, pages 402–416, 2010.

Jónathan Heras (School of Computing, University of Dundee)
Ekaterina Komendantskaya (School of Computing, University of Dundee)
Moa Johansson (Dept. of Computer Science and Engineering, Chalmers University)
Ewen MacLean (School of Mathematical & Computer Sciences, Heriot-Watt University)
Proof-Pattern Recognition in ACL2

ABSTRACT. Both statistical and symbolic machine-learning methods can be applied for proof-pattern recognition and lemma discovery in theorem provers. In this paper, we present a method to combine these two approaches in the prover ACL2. ACL2 has both automated and interactive features, and this plays an important role in the method we propose. The resulting proof-pattern recognition tool ACL2(ml) gathers proof statistics, uses statistical pattern recognition to pre-process the data, and suggests auxiliary lemmas in new proofs by analogy with already seen examples. This paper presents the implementation of ACL2(ml) alongside theoretical descriptions of the proof-pattern recognition and lemma discovery methods involved in it.

Patrick Blackburn (Roskilde University)
Thomas Bolander (Technical University of Denmark)
Torben Braüner (Roskilde University)
Klaus Frovin Joergensen (Roskilde University)
A Seligman-style Tableau System

ABSTRACT. Proof systems for hybrid logic typically use @-operators to access information hidden behind modalities; this labeling approach lies at the heart of most resolution, natural deduction, and tableau systems for hybrid logic. But there is another, less well-known approach, which we have come to believe is conceptually clearer. We call this Seligman-style inference, as it was first introduced and explored by Jerry Seligman in the setting of natural deduction and sequent calculus in the late 1990s. The purpose of this paper is to introduce a Seligman-style tableau system.

The most obvious feature of Seligman-style systems is that they work with arbitrary formulas, not just formulas prefixed by @-operators. To achieve this in a tableau system, we introduce a rule called GoTo which allows us to "jump to a named world" on a tableau branch, thereby creating a local proof context (which we call a block) on that branch. To the surprise of at least some of the authors (who have worked extensively on developing the labeling approach), Seligman-style inference is often much clearer. Not only is the approach more modular, but individual proofs are often more direct. We conclude with a discussion of termination and first-order hybrid logic.

Josh Berdine (Microsoft Research)
Nikolaj Bjorner (Microsoft Research)
Samin Ishtiaq (Microsoft Research)
Jael Kriener (University of Kent)
Christoph Wintersteiger (Microsoft Research)
Resourceful Reachability as HORN-LRA

ABSTRACT. The program verification tool SLAyer uses abstractions during analysis and relies on a solver for reachability to refine spurious counterexamples. In this context, we extract a reachability benchmark suite and evaluate methods for encoding reachability properties with heaps using Horn clauses over linear arithmetic. Our experimental evaluation establishes arithmetical encodings as a viable technique compared to bounded model checking over array based systems, but only given significant pre-processing and controls over how arrays are eliminated.

Alessandro Artale (Free University of Bolzano)
Roman Kontchakov (Birkbeck College)
Vladislav Ryzhikov (Free University of Bolzano)
Michael Zakharyaschev (Birkbeck College London)
The Complexity of Clausal Fragments of LTL

ABSTRACT. We introduce and investigate a number of fragments of propositional temporal logic LTL over the flow of time (Z, <). The fragments are defined in terms of the available temporal operators and the structure of the clausal normal form of the temporal formulas. We determine the computational complexity of the satisfiability problem for each of the fragments, which ranges from NLogSpace to PTime, NP and PSpace.

Mélanie Boudard (PRiSM, Univ. de Versailles-St-Quentin-en-Yvelines)
Olivier Hermant (CRI, MINES ParisTech)
Polarizing Double Negation Translations

ABSTRACT. Double-negation translations make it possible to encode and decode classical proofs in intuitionistic logic. We show that, in the cut-free fragment, we can simplify the translations and introduce fewer negations. To achieve this, we consider the polarization of the formulae and adapt the translations to the different connectives and quantifiers. We show that the embedding results still hold, using a customized version of the focused classical sequent calculus. We also prove the latter equivalent to more usual versions of the sequent calculus. This polarization process allows lighter embeddings, and sheds some light on the relationship between intuitionistic and classical connectives.

Marijn Heule (The University of Texas at Austin)
Armin Biere (Johannes Kepler University)
Blocked Clause Decomposition

ABSTRACT. We demonstrate that it is fairly easy to decompose any propositional formula into two subsets such that both can be solved by blocked clause elimination. Such a blocked clause decomposition is useful to cheaply detect backbone variables and equivalent literals. Blocked clause decompositions are especially useful when they are unbalanced, i.e., when one subset is much larger than the other. We present algorithms and heuristics to obtain unbalanced decompositions efficiently. Our techniques have been implemented in the state-of-the-art solver Lingeling. Experiments show that the performance of Lingeling is clearly improved by these techniques on application benchmarks of the SAT Competition 2013.
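
For readers unfamiliar with the notion, here is a small sketch of the blocked-clause test that the decomposition builds on (the definition is standard; the paper's decomposition algorithms and heuristics are not reproduced here): a clause C is blocked on a literal l if every resolvent of C on l with a clause containing -l is a tautology.

def is_tautology(clause):
    return any(-lit in clause for lit in clause)

def is_blocked(clause, formula):
    # C is blocked if some literal l in C makes all its l-resolvents tautological
    for lit in clause:
        partners = [d for d in formula if -lit in d]
        if all(is_tautology((set(clause) - {lit}) | (set(d) - {-lit}))
               for d in partners):
            return True
    return False

F = [[1, 2], [-1, -2], [1, -2]]
print([is_blocked(c, F) for c in F])  # [True, True, False]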

Szymon Klarman (Centre for Artificial Intelligence Research, UKZN and CSIR Meraka)
Thomas Meyer (Centre for Artificial Intelligence Research, UKZN and CSIR Meraka)
Prediction and Explanation over DL-Lite Data Streams

ABSTRACT. Stream reasoning is an emerging research area focusing on the development of reasoning techniques applicable to streams of rapidly changing, semantically enhanced data. In this paper, we consider data represented in Description Logics from the popular DL-Lite family, and study the logic foundations of prediction and explanation over DL-Lite data streams, i.e., reasoning from finite segments of streaming data to conjectures about the content of the streams in the future or in the past. We propose a novel formalization of the problem based on temporal "past-future" rules, grounded in Temporal Query Language. Such rules can naturally accommodate complex data association patterns, which are typically discovered through data mining processes, with logical and temporal constraints of varying expressiveness. Further, we analyse the computational complexity of reasoning with rules expressed in different fragments of the temporal language. As a result, we draw precise demarcation lines between NP-, DP- and PSpace-complete variants of our setting and, consequently, suggest relevant restrictions rendering prediction and explanation more feasible in practice.

Elvira Albert (Complutense University of Madrid)
Samir Genaim (Universidad Complutense de Madrid)
Enrique Martin-Martin (Universidad Complutense de Madrid)
May-Happen-in-Parallel Analysis for Priority-based Scheduling

ABSTRACT. A may-happen-in-parallel (MHP) analysis infers the set of pairs of program points that may execute in parallel along a program's execution. This is an essential piece of information for inferring more complex properties of concurrent programs: deadlock freeness, termination, resource consumption and data race detection can all greatly benefit from MHP relations to increase their accuracy. Previous MHP analyses have assumed a worst-case scenario by adopting a simplistic (non-deterministic) task scheduler, which can select any available task. While the results of the analysis for a non-deterministic scheduler are obviously sound, they can lead to an overly pessimistic result. We present an MHP analysis for an asynchronous language with prioritized task buffers. Priority-based scheduling is arguably the most common scheduling strategy used in the implementation of concurrent languages. The challenge is to take task priorities into account at static analysis time in order to filter out unfeasible MHP pairs.

Belaïd Benhamou (Aix-Marseille University)
Dynamic symmetry breaking in answer set programming

ABSTRACT. Much research has been done on defining a semantics for logic programs. The best known is the stable model semantics, which selects for each program one of its canonical models. The stable models of a logic program are, in a certain sense, the minimal Herbrand models of its program "reducts". On the other hand, the notion of symmetry elimination has been widely used in constraint programming and shown to be useful for increasing the efficiency of the associated solvers. However, symmetry in non-monotonic reasoning is still not well studied in general. For instance, Answer Set Programming (ASP) is a well-known framework, but only a few recent works on symmetry breaking in it exist. Ignoring symmetry breaking could make answer set systems do redundant work and lose efficiency. Here we study the notion of local and global symmetry in the framework of answer set programming. We show how local symmetries of a logic program can be detected dynamically, and we give some properties that allow these symmetries to be eliminated in SAT-based answer set solvers in order to enhance their efficiency.

Chuan Jiang (Iowa State University)
Ting Zhang (Iowa State University)
Partial Backtracking in CDCL Solvers

ABSTRACT. Backtracking is a basic technique of search-based satisfiability (SAT) solvers. In order to backtrack, a SAT solver uses conflict analysis to compute a backtracking level and discards all the variable assignments made between the conflicting level and the backtracking level. We observed that, due to the branching heuristics, the solver may later repeat many of the discarded decisions and propagations. In this paper, we present a new backtracking strategy, which we refer to as partial backtracking. We implemented this strategy in our solver Nigma. Using this strategy, Nigma amends the variable assignments instead of discarding them completely, so that it does not backtrack as many levels as the classic strategy. Our experiments show that Nigma solves 5% more instances than the version without partial backtracking.

Cezary Kaliszyk (University of Innsbruck)
Josef Urban (Radboud University)
Lemma Mining over HOL Light

ABSTRACT. Large formal mathematical libraries consist of millions of atomic inference steps that give rise to a corresponding number of proved statements (lemmas). Analogously to informal mathematical practice, only a tiny fraction of such statements is named and re-used in later proofs by the formal mathematicians. In this work, we suggest and implement exact criteria defining the estimated usefulness of the HOL Light lemmas for proving further theorems. We use these criteria to mine the large inference graph of all lemmas in the core HOL Light library, adding thousands of the best lemmas to the pool of named statements that can be re-used in later proofs. We then evaluate the usefulness of the new lemmas by comparing the performance of automated proving of the core HOL Light theorems with and without such added lemmas.

Tim French (The University of Western Australia)
John Mccabe-Dansted (The University of Western Australia)
Mark Reynolds (The University of Western Australia)
Verifying Temporal Properties in Real Models

ABSTRACT. Based on pioneering work of Läuchli and Leonard in the 1960s, a novel and expressive formal language, Model Expressions, for describing the compositional construction of general linear temporal structures has recently been proposed. A sub-language, Real Model Expressions, is capable of specifying models over the real flow of time, but its semantics are subtly different because of the specific properties of the real numbers.

Model checking techniques have been developed for the general linear Model Expressions, and it was shown that checking temporal formulas against structures described in the formal language is PSPACE-complete and linear in the length of the model expression.

In this paper we present a model checker for temporal formulas over real-flowed models. In fact the algorithm, and so its complexity, is the same as for the general linear case.

To show that this is adequate, we use a concept of temporal bisimulations and establish that it is respected by the compositional construction method. We can then check the correctness of using the general linear model checking algorithm when applied to real model expressions with their special semantics on real-flowed structures.

Iain Whiteside (Newcastle University)
Dominik Dietrich (DFKI Bremen)
David Aspinall (University of Edinburgh)
Polar: A Framework for Proof Refactoring

ABSTRACT. We present a prototype refactoring framework based on graph rewriting and bidirectional transformations that is designed to be generic, extensible, and declarative. Our approach uses a language-independent graph meta-model to represent proof developments in a generic way. We use graph rewriting to enrich the meta-model with dependency information and to perform refactorings, which are written as declarative rewrite rules. Our framework, called POLAR, is implemented in the GRGEN graph rewriting engine.

Tahiry Rabehaja (Macquarie University)
Annabelle McIver (Macquarie University)
Georg Struth (University of Sheffield)
An Event Structure Model for Probabilistic Concurrent Kleene Algebra

ABSTRACT. We give a true-concurrent model for probabilistic concurrent Kleene algebra. The model is based on probabilistic event structures, and combines ideas from Katoen's work on probabilistic concurrency with Varacca's probabilistic prime event structures. We find that pomset-distribution inclusion is not a precongruence (as for interleaving models); we avoid this problem by giving a true-concurrent version of the well-known probabilistic simulation.

David Delahaye (Cedric/Cnam/Inria, Paris)
Damien Doligez (Inria, Paris)
Frédéric Gilbert (Inria, Paris)
Pierre Halmagrand (Cedric/Cnam/Inria, Paris)
Olivier Hermant (CRI/MINES ParisTech/Inria, Fontainebleau)
Zenon Modulo: When Achilles Outruns the Tortoise using Deduction Modulo

ABSTRACT. We propose an extension of the tableau-based first order automated theorem prover Zenon to deduction modulo. The theory of deduction modulo is an extension of predicate calculus, which allows us to rewrite terms as well as propositions, and which is well suited for proof search in axiomatic theories, as it turns axioms into rewrite rules. We also present a heuristic to perform this latter step automatically, and assess our approach by providing some experimental results obtained on the benchmarks provided by the TPTP library, where this heuristic is able to prove difficult problems in set theory in particular. Finally, we describe an additional backend for Zenon that outputs proof certificates for Dedukti, which is a proof checker based on the lambda-Pi-calculus modulo.

Peter Baumgartner (National ICT Australia)
Joshua Bax (NICTA)
Proving Infinite Satisfiability

ABSTRACT. We consider the problem of automatically disproving invalid conjectures over data structures such as lists and arrays over integers, in the presence of additional hypotheses over these data structures. Such invalid conjectures come up frequently in applications of automated reasoning to software verification and the analysis of data-rich state-based systems, for example. More formally, the disproving problem is to show that LIST ∪ ARRAY ∪ HYP does not entail a sentence CON, where LIST are the list axioms, ARRAY are the array axioms, HYP are the mentioned additional hypotheses, and CON is the conjecture in question. The obvious approach to disproving is to show satisfiability of LIST ∪ ARRAY ∪ HYP ∪ {not CON}, where not is the negation operator, by means of a (complete) theorem prover. Unfortunately, current theorem proving technology is of limited usefulness for that: finite model finders cannot be used because the list axioms do not admit finite models, SMT solvers are typically incomplete on quantified formulas and face the same problem, and theorem provers based on saturation often do not terminate on satisfiable input (let alone completeness issues in the presence of arithmetic background theories).

We propose a different, conceptually simple approach based on refutational theorem proving. It consists in assuming that LIST ∪ ARRAY is satisfiable and providing templates for HYP that are guaranteed to preserve satisfiability of LIST ∪ ARRAY ∪ HYP. Then, disproving is done by proving that LIST ∪ ARRAY ∪ HYP entails not CON, i.e., that LIST ∪ ARRAY ∪ HYP ∪ {CON} is unsatisfiable.

The main point of the paper is to demonstrate the practical feasibility of our approach. By means of examples, we demonstrate that our template language covers many widely used cases. We also report on our rather positive experience in disproving sample conjectures with current theorem provers (Z3, SPASS+T and Beagle), and we compare their performance.

Régis Blanc (EPFL)
Ashutosh Gupta (IST)
Laura Kovacs (Chalmers University of Technology)
Bernhard Kragl (TU Wien)
Tree Interpolation in Vampire

ABSTRACT. We describe new extensions of the Vampire theorem prover for computing tree interpolants. These extensions generalize Craig interpolation in Vampire, and can also be used to derive sequence interpolants. We evaluated our implementation on a large number of examples over the theory of linear integer arithmetic and integer-indexed arrays, with and without quantifiers. When compared to other methods, our experiments show that some examples could only be solved by our implementation.

Uwe Egly (Vienna University of Technology)
Florian Lonsing (Vienna University of Technology)
Magdalena Widl (Vienna University of Technology)
Long-distance Resolution: Proof Generation and Strategy Extraction in Search-based QBF Solving

ABSTRACT. Strategies (and certificates) for quantified Boolean formulas (QBFs) are of high practical relevance, as they facilitate the verification of results returned by QBF solvers and the generation of solutions to problems formulated as QBFs. State-of-the-art approaches to obtaining strategies require traversing a Q-resolution proof of a QBF, which for many real-life instances is too large to handle. In this work, we consider the long-distance Q-resolution (LDQ) calculus, which allows particular tautological resolvents. We show that, for a family of QBFs, LDQ allows exponentially shorter proofs than Q-resolution. We further show that an approach to strategy extraction originally presented for Q-resolution proofs can also be applied to LDQ-resolution proofs. As a practical application, we consider search-based QBF solvers which are able to learn tautological clauses based on resolution and the conflict-driven clause learning method. We prove that the resolution proofs produced by these solvers correspond to proofs in the LDQ calculus and can therefore be used as input for strategy extraction algorithms. Experimental results illustrate the potential of the LDQ calculus in search-based QBF solving.

Fabio Mogavero (Università degli Studi di Napoli Federico II)
Aniello Murano (Università degli Studi di Napoli Federico II)
Loredana Sorrentino (Università degli Studi di Napoli Federico II)
On Promptness in Parity Games

ABSTRACT. Parity games are a powerful formalism for the automatic synthesis and verification of reactive systems. They are closely related to alternating omega-automata and emerge as a natural method for the solution of the mu-calculus model checking problem. Due to these strict connections, parity games are a well-established environment for describing liveness properties such as "every request that occurs infinitely often is eventually responded to". Unfortunately, the classical form of such a condition suffers from the strong drawback that there is no bound on the effective time that separates a request from its response, i.e., responses are not promptly provided. Recently, to overcome this limitation, several parity game variants have been proposed, in which quantitative requirements are added to the classic qualitative ones.

In this paper, we make a general study of the concept of promptness in parity games that allows us to put under a unique theoretical framework several of the cited variants along with new ones. Also, we describe simple polynomial reductions from all these conditions to either Büchi or parity games, which simplify all previously known procedures. In particular, they improve the complexity results of cost and bounded-cost parity games. Indeed, we provide solution algorithms showing that determining the winner of these games lies in UPTime ∩ CoUPTime.

Simone Fulvio Rollini (University of Lugano)
Leonardo Alt (University of Lugano)
Grigory Fedyukovich (University of Lugano)
Antti Hyvärinen (University of Lugano)
Natasha Sharygina (University of Lugano)
PeRIPLO: A Framework for Producing Efficient Interpolants for SAT-based Software Verification

ABSTRACT. Propositional interpolation is widely used as a means of over-approximation to achieve efficient SAT-based symbolic model checking. Different verification applications exploit interpolants for different purposes; it is thus unlikely that a single interpolation procedure could provide interpolants fit for all cases. This paper describes the PeRIPLO framework, an interpolating SAT-solver that implements a set of techniques to generate and manipulate interpolants for different model checking tasks. We demonstrate the flexibility of the framework in two software bounded model checking applications: verification of a given source code incrementally with respect to various properties, and verification of software upgrades with respect to a fixed set of properties. Both applications use interpolation for generating function summaries. Our systematic experimental investigation shows that logical strength and size of interpolants significantly affect the performance of verification, and that these characteristics indeed depend on the role interpolants play in the verification process.

Olivier Hermant (MINES ParisTech)
Lisa Allali (TypiCal, LIX)
Semantic A-Translations and Super-Consistency entail Classical Cut Elimination

ABSTRACT. We show that if a theory R defined by a rewrite system is super-consistent, the classical sequent calculus modulo R enjoys the cut elimination property, which was an open question. For such theories it was already known that proofs strongly normalize in natural deduction modulo R, and that cut elimination holds in the intuitionistic sequent calculus modulo R.

We first define a syntactic and a semantic version of Friedman's A-translation, showing that it preserves the structure of pre-Heyting algebras, our semantic framework. Then we relate the interpretation of a theory in the A-translated algebra and its A-translation in the original algebra. This allows us to show the stability of the super-consistency criterion and the cut-elimination theorem.

Sebastian A. Mödersheim (DTU)
Thomas Gross (University of Newcastle upon Tyne)
Luca Viganò (University of Verona, Italy)
Defining Privacy is Supposed to be Easy

ABSTRACT. Formally specifying privacy goals is not trivial. The most widely used approach in formal methods is based on the static equivalence of frames in the applied pi-calculus, basically asking whether or not the intruder is able to distinguish two given worlds. A subtle question is how we can be sure that we have specified all pairs of worlds to properly reflect our intuitive privacy goal. To address this problem, we introduce in this paper a novel and declarative way to specify privacy goals, called α-β privacy, and relate it to static equivalence. This new approach is based on specifying two formulae α and β in first-order logic with Herbrand universes, where α reflects the intentionally released information and β includes the actual cryptographic (“technical”) messages the intruder can see. Then α-β privacy means that the intruder cannot derive any “non-technical” statement from β that he cannot derive from α already. We describe by a variety of examples how this notion can be used in practice. Even though α-β privacy does not directly contain a notion of distinguishing between worlds, there is a close relationship to static equivalence of frames that we investigate formally. This allows us to justify (and criticize) the specifications that are currently used in verification tools, and obtain partial tool support for α-β privacy.

Pablo Barceló (Universidad de Chile)
Gaelle Fontaine (University of Chile)
Anthony Widjaja Lin (Oxford University Computing Laboratory)
Expressive Path Queries on Graphs with Data

ABSTRACT. Typical navigational languages for querying graph databases -- such as the conjunctive regular path queries and their extensions -- are not rich enough to express relevant properties of the interaction between the underlying data and the topology. Two languages have been recently proposed to overcome this problem: a powerful walk logic (WL), which corresponds to the extension of FO over graph databases with explicit quantification over paths, and a simple class of regular expressions with memory (REMs), which compare the data that appears in a path while checking whether the label of the path satisfies a given regular condition.

Our main contributions are the following. First, we pinpoint the exact complexity of the evaluation problem for WL, showing that it is non-elementary even in data complexity (that is, assuming queries to be fixed). This result closes a gap in the WL literature and, at the same time, rules out any practical applicability of the language. We then move to the class of REMs, a formalism that suffers from the opposite drawback: although REM evaluation can be solved in exponential time -- and even in polynomial time in data complexity -- the language is rather rudimentary for expressing complex properties, such as how data in different paths compare to each other. Our second contribution is the identification of a natural extension of the class of REMs, called register logic (RL), that makes up for this lack of expressiveness while retaining an elementary complexity of evaluation. We also study the limits of tractability in data complexity for fragments of RL, and how some RL properties can be expressed in a suitable extension of WL.

Ari Saptawijaya (Centro de Inteligencia Artificial - CENTRIA - Universidade Nova de Lisboa)
Luís Moniz Pereira (Centro de Inteligencia Artificial - CENTRIA - Universidade Nova de Lisboa)
Incremental Tabling for Query-Driven Propagation of Logic Program Updates

ABSTRACT. We foster a novel implementation technique for logic program updates that further exploits incremental tabling in logic programming (here using XSB Prolog). Propagation of fluent updates is controlled by initially keeping updates pending in the database, and then making active only those up to the timestamp of an actual query, by incremental assertions. Such assertions automatically trigger system-implemented incremental tabling of fluents (and their negated complements), on the initiative of queries, with respect to a predefined upper time limit. The frame problem can then be dealt with by inspecting the table for the latest time a fluent is known to be assuredly true, i.e., the latest time it is not supervened by its negated complement, relative to the given query time. We adapt the dual program transformation to help propagate the complement of a fluent incrementally, in order to establish whether the fluent is still true at some time point or whether its complement is instead. This affords us a form of controlled, though automatic, system-level truth maintenance, up to the actual query time. Our approach reconciles high-level top-down deliberative reasoning about a query with autonomous low-level bottom-up world reactivity to ongoing updates, and it might be adopted elsewhere for reasoning in logic.

David Aspinall (University of Edinburgh)
Ewen Denney (SGT/NASA Ames)
Christoph Lüth (DFKI)
A Semantic Basis for Proof Queries and Transformations

ABSTRACT. We extend the query language PrQL, which is designed for inspecting machine representations of proofs, to also allow transformations of these proofs. There are several novel aspects. First, PrQL natively supports hiproofs which express proof structure using hierarchically nested labelled trees, which we claim is a natural way of taming the complexity of huge proofs. Query-driven transformations enable manipulation of this structure, in particular, to transform proofs produced by interactive theorem provers into forms that assist their understanding, or that could be consumed by other tools. In this paper we motivate and define basic transformation operations, using a new abstract denotational semantics of hiproofs and queries. This extends our previous semantics for queries based on syntactic tree representations. We define update operations that add and remove sub-proofs, and manipulate the hierarchy to group and ungroup nodes. We show that these basic operations are well-behaved so can form a sound core for a hierarchical transformation language.

Maximilian Schlund (Technische Universität München)
Michal Terepeta (Technical University of Denmark)
Michael Luttenberger (Technische Universität München)
Putting Newton into Practice: A Solver for Polynomial Equations over Semirings

ABSTRACT. We present the first implementation of Newton's method for solving systems of equations over ω-continuous semirings (based on the results of Esparza, Kiefer and Luttenberger (J. ACM 2010) and of Luttenberger and Schlund (LATA 2013)). For instance, such equation systems arise naturally in the analysis of interprocedural programs or the provenance computation for Datalog. Our implementation provides an attractive alternative for computing their exact least solution in some cases where the ascending chain condition is not met and hence standard fixed-point iteration needs to be combined with some over-approximation (e.g., widening techniques) to terminate. We present a generic C++ library along with the main algorithms and analyze their complexity. Furthermore, we describe our implementation of the counting semiring based on semilinear sets. Finally, we discuss motivating examples as well as performance benchmarks.

Mikolas Janota (INESC-ID Lisboa)
Radu Grigore (Department of Computer Science, University of Oxford)
Joao Marques-Silva (University College Dublin)
On QBF Proofs and Preprocessing

ABSTRACT. Quantified Boolean formulas (QBFs) elegantly extend propositional satisfiability providing us with a canonical representation for PSPACE problems. To overcome the inherent complexity of QBF, significant effort has been invested in developing QBF solvers as well as the underlying proof systems. At the same time, formula preprocessing is crucial for the application of QBF solvers. This paper focuses on a missing link in currently-available technology: How to obtain a proof for a formula that had been preprocessed before it was given to a solver? The paper targets a suite of commonly-used preprocessing techniques and shows how to reconstruct proofs for them. On the negative side, the paper discusses certain limitations of the currently-used proof systems in the light of preprocessing. The presented proof-construction techniques were implemented and evaluated in the state-of-the-art QBF preprocessor bloqqer.

Alexey Ignatiev (INESC-ID Lisboa)
Antonio Morgado (INESC-ID Lisboa)
Jordi Planes (Universitat de Lleida)
Joao Marques-Silva (University College Dublin)
Maximal Falsifiability: Definitions, Algorithms, and Applications

ABSTRACT. Similarly to Maximum Satisfiability (MaxSAT), Minimum Satisfiability (MinSAT) is an optimization extension of the Boolean Satisfiability (SAT) decision problem. In recent years, both problems have been studied in terms of exact and approximation algorithms. In addition, the MaxSAT problem has been characterized in terms of Maximal Satisfiable Subsets (MSSes) and Minimal Correction Subsets (MCSes), as well as Minimal Unsatisfiable Subsets (MUSes) and hitting set dualization. However, and in contrast with MaxSAT, no such characterizations exist for MinSAT. This paper addresses this issue by casting the MinSAT problem in a more general framework. The paper studies Maximal Falsifiability (MaxFalse), the problem of computing a subset-maximal set of clauses that can be simultaneously falsified, and shows that MinSAT corresponds to the complement of a largest subset-maximal set of simultaneously falsifiable clauses. Additional contributions of the paper include novel algorithms for Maximum and Maximal Falsifiability, as well as hitting set dualization results for the MaxFalse problem. Moreover, the efficiency of the proposed algorithms is validated on practical instances.
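
A tiny brute-force rendering of these definitions (ours, purely for illustration; the paper's algorithms are SAT-based and far more refined): enumerate the subset-maximal sets of simultaneously falsifiable clauses, after which the MinSAT optimum is the total number of clauses minus the size of a largest such set.

from itertools import product

def falsified(assignment, clauses):
    # indices of clauses with no literal satisfied by the total assignment
    return frozenset(i for i, c in enumerate(clauses)
                     if not any(assignment[abs(l)] == (l > 0) for l in c))

def maximal_falsifiable(clauses, n):
    sets = {falsified(dict(zip(range(1, n + 1), bits)), clauses)
            for bits in product([True, False], repeat=n)}
    return [s for s in sets if not any(s < t for t in sets)]

clauses = [[1], [-1], [1, 2]]           # over variables 1 and 2
for s in maximal_falsifiable(clauses, 2):
    print(sorted(s))                    # the maximal sets: [0, 2] and [1]
# MinSAT optimum here: 3 - 2 = 1 satisfied clause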

Calin-Rares Turliuc (Imperial College London)
Nataly Maimari (Imperial College London)
Alessandra Russo (Imperial College London)
Krysia Broda (Imperial College)
On Minimality and Integrity Constraints in Probabilistic Abduction

ABSTRACT. Abduction is a type of logical inference that can be successfully combined with probabilistic reasoning. However, the role of integrity constraints has not received much attention when performing logical-probabilistic inference. The contribution of our paper is a probabilistic abductive framework, based on the distribution semantics for normal logic programs, that handles negation as failure and integrity constraints in the form of denials. Integrity constraints are treated as evidence from the perspective of probabilistic inference. An implementation is provided that computes alternative (non-minimal) abductive solutions, using an appropriately modified abductive system, and generates the probability of a query for given solutions. An example application of the framework is given, where gene network topologies are abduced according to biological expert knowledge, to explain observed gene expressions. The example demonstrates the practical utility of the proposed framework and suggests directions for future work.

Ugo Dal Lago (University of Bologna)
Giulio Pellitta (University of Bologna)
Complexity Analysis in Presence of Control Operators and Higher-Order Functions

ABSTRACT. A polarized version of Girard, Scedrov and Scott’s Bounded Linear Logic is introduced and its normalization properties studied. Following Laurent, the logic naturally gives rise to a type system for the λμ-calculus, whose derivations reveal bounds on the time complexity of the underlying term. This is the first example of a type system for the λμ-calculus guaranteeing time complexity bounds for typable programs.

Tom Gundersen (University Paris 7)
Willem Heijltjes (University of Bath)
Michel Parigot (CNRS - University Paris 7)
A Proof of Strong Normalisation of the Typed Atomic Lambda-Calculus

ABSTRACT. The atomic lambda-calculus is a typed lambda-calculus with explicit sharing, which originates in a Curry-Howard interpretation of a deep-inference system for intuitionistic logic. It has been shown that it allows fully lazy sharing to be reproduced in a typed setting. In this paper we prove strong normalisation of the typed atomic lambda-calculus using Tait's reducibility method. In addition, we give a characterisation of exactly the strongly normalisable atomic lambda-terms by an intersection typing discipline.

Patrick Koopmann (University of Manchester)
Renate A. Schmidt (University of Manchester)
Forgetting Concept and Role Symbols in ALCH-Ontologies

ABSTRACT. We develop a resolution-based method for forgetting concept and role symbols in ALCH ontologies, or for computing uniform interpolants in ALCH. Uniform interpolants use only a restricted set of symbols, while preserving consequences of the original ontology involving these symbols. While recent work towards practical methods for uniform interpolation in expressive description logics focuses on forgetting concept symbols, we believe most applications would benefit from the possibility to forget both role and concept symbols. We focus on the description logic ALCH, which allows for the formalisation of role hierarchies. Our approach is based on a recently developed resolution-based calculus for forgetting concept symbols in ALC ontologies, which we extend with redundancy elimination techniques to make it practical for larger ontologies. Experiments on ALCH fragments of real-life ontologies suggest that our method is applicable in many real-life applications.

Anton Belov (University College Dublin)
António Morgado (University College Dublin)
Joao Marques-Silva (University College Dublin)
SAT Preprocessing for MaxSAT

ABSTRACT. State-of-the-art algorithms for industrial instances of the MaxSAT problem rely on iterative calls to a SAT solver. Preprocessing is crucial for the acceleration of SAT solving, and the key preprocessing techniques rely on the application of resolution and subsumption elimination. Additionally, satisfiability-preserving clause elimination procedures are often used. Since MaxSAT computation typically involves a large number of SAT calls, we are interested in whether an input instance of the MaxSAT problem can be preprocessed up-front, i.e. prior to running the MaxSAT solver, rather than (or, in addition to) during each iterative SAT solver call. The key requirement in this setting is that the preprocessing has to be sound, i.e. such that the solution can be reconstructed correctly and efficiently after the execution of a MaxSAT algorithm on the preprocessed instance. While, as we demonstrate in this paper, certain clause elimination procedures are sound for MaxSAT, it is well known that this is not the case for resolution and subsumption elimination. In this paper we show how to adapt these preprocessing techniques to MaxSAT. To achieve this we recast the MaxSAT problem in a recently introduced labelled-CNF framework, and show that within the framework the preprocessing techniques can be applied soundly. Furthermore, we show that MaxSAT algorithms restated in the framework have a natural implementation on top of an incremental SAT solver. We evaluate the prototype implementation of a MaxSAT algorithm WMSU1 in this setting, demonstrate the effectiveness of preprocessing, and show overall improvement with respect to non-incremental versions of the algorithm on some classes of problems.

Hira Taqdees (SEECS, NUST)
Osman Hasan (SEECS, NUST)
Formalization of Laplace Transform using the Multivariable Calculus Theory of HOL-Light

ABSTRACT. Algebraic techniques based on the Laplace transform are widely used for solving differential equations and evaluating the transfer of signals while analyzing physical aspects of many safety-critical systems. To facilitate formal analysis of these systems, we present a formalization of the Laplace transform using the multivariable calculus theories of HOL Light. In particular, we use the integral, differential, transcendental and topological theories of multivariable calculus to formally define the Laplace transform in higher-order logic and reason about the correctness of its properties, such as existence, linearity, frequency shifting, and differentiation and integration in the time domain. In order to demonstrate the practical effectiveness of this formalization, we use it to formally verify the transfer function of a Linear Transfer Converter (LTC) circuit, which is a commonly used electrical circuit.
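
For reference, the classical definitions being formalized are the standard ones (stated here in our notation; the formalization proves such properties under explicit existence side conditions):

\[
\mathcal{L}\{f\}(s) = \int_0^{\infty} e^{-st} f(t)\, dt,
\qquad
\mathcal{L}\{f'\}(s) = s\,\mathcal{L}\{f\}(s) - f(0).
\]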

Krishnendu Chatterjee (IST Austria)
Vojtech Forejt (University of Oxford)
Dominik Wojtczak (University of Liverpool)
Multi-objective discounted reward verification in graphs and MDPs

ABSTRACT. We study the problem of achieving a given value in Markov decision processes (MDPs) with several independent discounted reward objectives. We consider a generalised version of discounted reward objectives, in which the amount of discounting depends on the states visited and on the objective. This definition extends the usual definition of discounted reward and allows us to capture systems in which the values of different commodities diminish at different and variable rates.

We establish results for two prominent subclasses of the problem, namely state-discount models where the discount factors are only dependent on the state of the MDP (and independent of the objective), and reward-discount models where they are only dependent on the objective (but not on the state of the MDP). For the state-discount models we use a straightforward reduction to expected total reward objectives and show that the problem whether a value is achievable can be solved in polynomial time. For the reward-discount model we show that memory and randomisation of the strategies are required, but nevertheless that the problem is decidable and it is sufficient to consider strategies which after a certain number of steps behave in a memoryless way.

For the general case, we show that when restricted to graphs (i.e. MDPs with no randomisation), pure strategies and discount factors of the form $1/n$ where $n$ is an integer, the problem is in PSPACE and finite memory suffices for achieving a given value. We also show that when the discount factors are not of the form $1/n$, the memory required by a strategy can be infinite.
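
One natural way to write out the generalised objective described above (our notation, inferred from the abstract rather than taken from the paper): for objective i, a run s_0 s_1 s_2 ... earns

\[
\mathit{val}_i = \sum_{t=0}^{\infty} \Bigl( \prod_{k=0}^{t-1} \lambda_i(s_k) \Bigr) r_i(s_t),
\]

where λ_i is the state- and objective-dependent discount factor and r_i the reward function. The state-discount and reward-discount models then restrict λ_i(s) to depend only on s or only on i, respectively, and the classical notion is the constant case λ_i(s) = λ.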

Luís Cruz-Filipe (Escola Superior Náutica Infante Dom Henrique)
Rita Henriques (Faculdade de Ciências da Universidade de Lisboa)
Isabel Nunes (Faculty of Sciences, Lisbon University)
Description logics, rules and multi-context systems

ABSTRACT. The combination of rules and ontologies has been a fertile topic of research in recent years, with several different systems proposed to achieve this goal. In this paper, we look at two of these formalisms, Mdl-programs and multi-context systems, which address different aspects of this combination and include different, incomparable programming constructs. Despite this, we show that every Mdl-program can be transformed into a multi-context system, and that this transformation relates the different semantics of the two paradigms in a natural way. As an application, we show how a set of design patterns for multi-context systems can be obtained from previous work on Mdl-programs.

Shachar Itzhaky (Tel Aviv University)
Sumit Gulwani (Microsoft Research)
Neil Immerman (UMass Amherst)
Mooly Sagiv (Tel Aviv University)
Solving Geometry Problems using a Combination of Symbolic and Numerical Reasoning

ABSTRACT. We describe a framework that combines deductive, numeric, and inductive reasoning to solve geometric problems. Applications include the generation of geometric models and animations, as well as problem solving in the context of intelligent tutoring systems. Our novel methodology uses (i) deductive reasoning to generate a partial program from logical constraints, (ii) numerical methods to evaluate the partial program, thus creating geometric models which are solutions to the original problem, and (iii) inductive synthesis to read off new constraints that are then applied to one more round of deductive reasoning, leading to the desired deterministic program. By combining these methods we were able to solve problems that none of the methods could solve on its own. The number of nondeterministic choices in a partial program provides a measure of how close a problem is to being solved, and can thus be used in the educational context for grading and providing hints. We have successfully evaluated our methodology on 18 Scholastic Aptitude Test geometry problems and 11 ruler/compass-based geometry construction problems. Our tool solved these problems in a few seconds per problem on average.
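
The division of labour between phases (i) and (ii) can be pictured with a toy partial program: deduction pins down everything except one nondeterministic choice, and numeric evaluation resolves that choice against the remaining constraints. The following sketch is purely illustrative and is not the authors' tool; the construction and all names are invented.

    import math

    def intersect_circle_hline(cx, cy, r, y):
        """Both intersection points of the circle (cx, cy, r) with the
        horizontal line Y = y (assumes they actually intersect)."""
        dx = math.sqrt(r * r - (y - cy) ** 2)
        return [(cx - dx, y), (cx + dx, y)]

    A, B = (0.0, 0.0), (4.0, 0.0)
    # "Deductive" phase (here done by hand): C lies on the circle of
    # radius 5 around A and on the line y = 3 -- two candidate positions.
    candidates = intersect_circle_hline(*A, 5.0, 3.0)
    # "Numeric" phase: keep the choices meeting the extra constraint |BC| = 3.
    solutions = [C for C in candidates if abs(math.dist(B, C) - 3.0) < 1e-9]
    print(solutions)   # [(4.0, 3.0)] -- the choice has become deterministic

In this toy setting the number of surviving candidates plays the role of the nondeterminism measure mentioned above: once a single choice survives, the construction is deterministic.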

Szymon Klarman (Centre for Artificial Intelligence Research, UKZN and CSIR Meraka)
Practical Querying of Temporal Data via OWL 2 QL and SQL:2011

ABSTRACT. We develop a practical approach to querying temporal data stored in temporal SQL:2011 databases through the semantic layer of OWL 2 QL ontologies. An interval-based temporal query language (TQL), which we propose for this task, is defined via naturally characterizable combinations of temporal logic with conjunctive queries. This foundation gives TQL querying a well-defined semantics and formal properties. In particular, we show that under certain mild restrictions the data complexity of query answering remains in AC$^0$, i.e., as in the usual, nontemporal case. On the practical side, TQL is tailored specifically to offer maximum expressivity while preserving the possibility of reusing standard first-order rewriting techniques and tools for OWL 2 QL.

Manuel Lamotte-Schubert (Max Planck Institute for Informatics)
Christoph Weidenbach (Max Planck Institute for Informatics)
BDI: A New Decidable First-order Clause Class

ABSTRACT. BDI (Bounded Depth Increase) is a new decidable first-order clause class. It strictly includes known classes such as PVD. BDI does not restrict the arity of function and predicate symbols or the shape of atoms. Instead, the shape of "cycles" in resolution inferences is restricted such that the depth of generated clauses may increase but is still finitely bounded. The BDI class is motivated by real-world problems where function terms are used to represent record structures.

We show that the hyper-resolution calculus modulo redundancy elimination terminates on BDI clause sets. Applying this result to the ordered resolution calculus, we can also prove termination of ordered resolution on BDI, yielding a more efficient decision procedure.

Francesco Alberti (Università della Svizzera Italiana)
Silvio Ghilardi (Dipartimento di Matematica, Università degli Studi di Milano)
Natasha Sharygina (Università della Svizzera Italiana)
Acceleration-based safety decision procedure for programs with arrays

ABSTRACT. Reachability analysis of programs with arrays is a well-known challenging problem, and many existing approaches are limited to incomplete solutions. In this paper we identify a class of programs with arrays for which safety is fully decidable. The completeness result we provide is built on acceleration techniques for arrays and the decision procedure for the Array Property Fragment.

Patrick Doherty (Linköping University)
Fredrik Heintz (Linköping University)
Jonas Kvarnström (Linköping University)
Robotics, Temporal Logic and Stream Reasoning

ABSTRACT. The area of AI robotics offers a set of fundamentally challenging problems when attempting to integrate logical reasoning functionality into such systems. The problems arise in part from the high degree of complexity of such architectures, which involve real-time behaviour, distribution, concurrency, varying data latencies in operation, and several levels of abstraction. For logic to work practically in such systems, traditional theorem proving, although important, is often not feasible for many of the required reasoning functions. In this article, we present a number of novel approaches to such reasoning functionality based on the use of temporal logic. The functionalities covered include automated planning, stream-based reasoning, and execution monitoring.
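
As a toy flavour of stream-based execution monitoring with a metric-temporal property (a sketch under our own simplifications, not the authors' architecture; the property, threshold, and names are invented), the following monitor incrementally checks, over a stream of samples, the formula always(alt < 50 -> eventually-within-3-samples(alt >= 50)):

    def monitor(stream, threshold=50, limit=3):
        """Incrementally check that every drop below the threshold
        recovers within `limit` samples."""
        due = None                         # deadline of the oldest open obligation
        for t, alt in enumerate(stream):
            if alt >= threshold:
                due = None                 # all open obligations discharged
            else:
                if due is None:
                    due = t + limit        # recovery required by this sample
                elif t > due:
                    return "violated at sample %d" % t
        return "no violation observed"

    print(monitor([60, 45, 48, 55, 40, 42, 41, 43, 44]))  # violated at sample 8

The point of the incremental formulation is that each sample is processed in constant time as it arrives, rather than re-evaluating the formula over the whole history.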

Abhijeet Mohapatra (Stanford University)
Michael Genesereth (Stanford University)
An Incremental Algorithm to Optimally Maintain Aggregate Views

ABSTRACT. We propose an algorithm called CReaM to incrementally maintain materialized aggregate views with user-defined aggregates in response to changes to the database tables from which the views are derived. When the physical design of the underlying database is optimized, we show that the time taken by CReaM to update an aggregate view is optimal.
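
The underlying idea of incremental maintenance can be pictured for the built-in aggregates SUM and COUNT (a generic sketch, not CReaM itself, whose contribution lies in user-defined aggregates and optimal update time):

    # Incrementally maintain the view  SELECT key, SUM(val), COUNT(*)
    # GROUP BY key  under inserts and deletes, without recomputation.
    from collections import defaultdict

    view = defaultdict(lambda: [0, 0])        # key -> [sum, count]

    def insert(key, val):
        view[key][0] += val
        view[key][1] += 1

    def delete(key, val):                     # assumes (key, val) is in the table
        view[key][0] -= val
        view[key][1] -= 1
        if view[key][1] == 0:
            del view[key]                     # group disappears from the view

    insert("a", 10); insert("a", 5); insert("b", 7)
    delete("a", 10)
    print(dict(view))                         # {'a': [5, 1], 'b': [7, 1]}

SUM and COUNT are self-maintainable under deletions; aggregates such as MIN and MAX are not, which is part of what makes the general problem, and its optimality analysis, nontrivial.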

Gabriel Aranda (Universidad Complutense de Madrid)
Susana Nieva (UCM)
Fernando Saenz-Perez (Grupo de programación declarativa (GPD), Departamento de Ingeniería del Software e Inteligencia Artificial, Universidad Complutense de Madrid)
Jaime Sánchez-Hernández (Universidad Complutense de Madrid)
Incorporating Hypothetical Views and Extended Recursion into SQL Database Systems

ABSTRACT. Current database systems supporting recursive SQL impose restrictions on queries, such as linearity, and do not implement mutual recursion. In a previous work we presented the language and prototype R-SQL to overcome those drawbacks. Now we introduce a formalization and an implementation of the database system HR-SQL that, in addition to extended recursion, incorporates hypothetical reasoning in a novel way which cannot be found in any other SQL system, allowing both positive and negative assumptions. The formalization extends the fixpoint semantics of R-SQL. The implementation improves the efficiency of the previous prototype and is integrated in a commercial DBMS.
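
The fixpoint semantics referred to can be pictured as the usual bottom-up iteration: start with empty relations and reapply all defining queries, including mutually recursive ones, until the database stops changing. A generic sketch (ours, not the HR-SQL implementation; the relations and the bound are invented):

    # Bottom-up least-fixpoint evaluation of two mutually recursive
    # relations over 0..9:  even(0).  even(X) <- odd(X-1).  odd(X) <- even(X-1).
    def step(db):
        even = {0} | {x + 1 for x in db["odd"] if x + 1 < 10}
        odd = {x + 1 for x in db["even"] if x + 1 < 10}
        return {"even": even, "odd": odd}

    db = {"even": set(), "odd": set()}
    while True:
        new = step(db)
        if new == db:          # least fixpoint reached
            break
        db = new
    print(sorted(db["even"]), sorted(db["odd"]))  # [0, 2, 4, 6, 8] [1, 3, 5, 7, 9]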

Martin Baláž (Comenius University)
Jozef Frtús (Comenius University)
Martin Homola (Comenius University, Bratislava, Slovakia)
Conflict Resolution in Structured Argumentation

ABSTRACT. While several interesting argumentation-based semantics for defeasible logic programs have been proposed, to the best of our knowledge none of these approaches handles closure under strict rules adequately: they are either not closed, or they use workarounds such as transposition of rules, which violates the desired directionality of logic programming rules.

We propose a novel argumentation-based semantics in which the status of arguments is determined by attacks between newly introduced conflict resolutions instead of attacks between arguments. We show that the semantics is closed w.r.t. strict rules, respects the directionality of inference rules, and satisfies other desirable properties previously published in the literature.

Jiefei Ma (Imperial College London)
Rob Miller (University College London)
Leora Morgenstern (Leidos Corporation)
Theodore Patkos (Computer Science Department, University of Crete)
An Epistemic Event Calculus for ASP-based Reasoning About Knowledge of the Past, Present and Future

ABSTRACT. We present a generalisation of the Event Calculus, specified in classical logic and implemented in ASP, that facilitates reasoning about non-binary-valued fluents in domains with non-deterministic, triggered, concurrent, and possibly conflicting actions. We show via a case study how this framework may be used as a basis for a "possible-worlds" style approach to epistemic and causal reasoning in a narrative setting. In this framework an agent may gain knowledge about both fluent values and action occurrences through sensing actions, lose knowledge via non-deterministic actions, and represent plans that include conditional actions whose conditions may be initially unknown.

Margus Veanes (Microsoft Research)
Nikolaj Bjorner (Microsoft Research)
Lev Nachmanson (Microsoft Research)
Sergey Bereg (The University of Texas at Dallas)
Effectively Monadic Predicates

ABSTRACT. Monadic predicates play a prominent role in many decidable cases, including decision procedures for symbolic automata. We are here interested in discovering whether a formula can be rewritten into a Boolean combination of monadic predicates. Our setting is quantifier-free formulas over a decidable background theory, such as arithmetic, and we develop a semi-decision procedure for extracting a monadic decomposition of a formula when one exists.
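
To make the target notion concrete: $\varphi(x, y)$ is monadically decomposable if it is equivalent to a Boolean combination of predicates each constraining only $x$ or only $y$. The sketch below merely checks a candidate decomposition of a toy formula of ours, using the z3-solver Python package, whereas the paper's procedure extracts such decompositions automatically:

    # Verify that phi(x, y) equals a Boolean combination of monadic predicates.
    from z3 import Ints, And, Or, Not, Solver, sat

    x, y = Ints("x y")
    phi = And(x >= 0, y >= 0, x + y >= 1)          # genuinely binary constraint
    # Candidate Boolean combination of monadic predicates:
    psi = And(x >= 0, y >= 0, Or(x >= 1, y >= 1))

    s = Solver()
    s.add(Not(phi == psi))      # any model would refute the equivalence
    print("decomposition valid:", s.check() != sat)   # True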

Nikolaj Bjorner (Microsoft Research)
Arie Gurfinkel (Software Engineering Institute, Carnegie Mellon University)
Konstantin Korovin (Manchester University)
Ori Lahav (Tel Aviv University)
Instantiations, Zippers and EPR Interpolation

ABSTRACT. This paper describes interpolation procedures for EPR. In principle, interpolation for EPR is simple: it is a special case of first-order interpolation. In practice, we would like procedures that take advantage of the properties of EPR: EPR admits finite models, and those models can sometimes be described very compactly. Inspired by procedures for propositional logic that use models and cores, but not proofs, we develop a procedure for EPR that uses just models and cores.
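
For orientation, the standard notion being specialized (in one common convention): when $A \wedge B$ is unsatisfiable, an interpolant is a formula $I$ over the vocabulary shared by $A$ and $B$ such that

    $A \models I \qquad \text{and} \qquad I \wedge B \models \bot,$

i.e., $I$ summarizes exactly the part of $A$ that is responsible for the conflict with $B$.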