ABSTRACT. Gossip protocols deal with a group of communicating agents, each holding a piece of private information, and aim at arriving at a situation in which all the agents know each other's secrets. Distributed epistemic gossip protocols are particularly simple distributed programs that use as guards formulas from an epistemic logic. Recently, the implementability of these distributed protocols and the problems of their partial correctness and termination were shown to be decidable, but the decidability of their fair termination was left open. We show that for any distributed epistemic gossip protocol, its fair termination (with respect to agents or rules) is decidable.
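The basic gossip setting can be illustrated with a small simulation. The sketch below is hypothetical and deliberately simplified: it executes a fixed sequence of calls and checks the goal state, without the epistemic guards that the distributed protocols in the paper use to decide which calls to make.

```python
# Hypothetical illustration of the gossip setting (not the paper's
# protocols): each call merges the two participants' secret sets, and
# the goal is the state in which every agent knows every secret.

def gossip(n, calls):
    """Simulate a sequence of calls among n agents.

    Agent i initially knows only its own secret i; a call (i, j)
    makes both participants pool their secrets. Returns the final
    knowledge state as a list of sets."""
    knows = [{i} for i in range(n)]
    for i, j in calls:
        merged = knows[i] | knows[j]
        knows[i] = knows[j] = merged
    return knows

def all_experts(knows):
    """True exactly when every agent knows every secret."""
    n = len(knows)
    return all(k == set(range(n)) for k in knows)

# A classic schedule: 4 agents reach the goal state in only 4 calls.
state = gossip(4, [(0, 1), (2, 3), (0, 2), (1, 3)])
print(all_experts(state))  # True
```

Termination questions then ask whether every (fair) run of a protocol eventually reaches the `all_experts` state.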
ABSTRACT. In this paper we show that the fully boxed fragment of first-order LTL (the fragment in which the only temporal connective is [] (Box), and every atom, every operation other than [], and every quantifier is guarded by []) is not recursively enumerable. To obtain incompleteness for this weak fragment we associate with its formulas a corresponding Gödel logic, G_down (an infinitely valued Gödel logic with a single accumulation point of truth values, at 0). By embedding the theory of two equivalence relations on finite sets, we show with a single proof that the monadic fragments of G_down and G_up (the infinitely valued Gödel logic with a single accumulation point of truth values at 1, which is also the intersection of all finitely valued Gödel logics) with two predicate symbols are not recursively enumerable.
We use arguments from first-order LTL to demonstrate that the complement of the 1-satisfiable formulas of G_down is not recursively enumerable. This is in contrast to the fact that the same fragment for G_up is decidable.
Thomas Espitau (Universite Pierre et Marie Curie)
Benjamin Gregoire (INRIA Sophia-Antipolis)
Justin Hsu (University of Pennsylvania)
Pierre-Yves Strub (Ecole Polytechnique)
ABSTRACT. Proof by coupling is a powerful proof technique for proving probabilistic properties like stochastic dominance and rapid mixing of Markov chains, arising in a variety of fields. Couplings have also proved to be a useful abstraction for formal reasoning about relational properties of probabilistic programs, in particular for modeling reduction-based cryptographic proofs and for verifying differential privacy. In this paper, we demonstrate that probabilistic couplings can be used for verifying non-relational properties, such as uniformity and conditional independence. Specifically, we demonstrate that the program logic pRHL---whose proofs correspond to proofs by coupling---can be used in combination with self-composition for proving such facts. We formally verify our main examples using the EasyCrypt proof assistant.
ABSTRACT. We present new results on a constraint satisfaction problem arising from the inference of resource types in automatic amortized analysis for object-oriented programs by Rodriguez and Hofmann. These constraints are essentially linear inequalities between infinite lists of nonnegative rational numbers which are added and compared pointwise. We study the question of satisfiability of a system of such constraints in two variants with significantly different complexity. We show that in its general form (the original formulation presented by Hofmann and Rodriguez at LPAR 2012) this satisfiability problem is hard for the famous Skolem-Mahler-Lech problem, whose decidability status is still open but which is at least NP-hard. We then identify a subcase of the problem that still covers all instances arising from type inference in the aforementioned amortized analysis and show decidability of satisfiability in polynomial time by a reduction to linear programming. We further give a classification of the growth rates of satisfiable systems in this format and are thus able to draw conclusions about resource bounds for programs that involve lists, and also arbitrary data structures if we make the additional restriction that their resource annotations are generated by an infinite list (rather than an infinite tree, as in the most general case). Decidability of the tree case, which was also part of the original formulation by Hofmann and Rodriguez, remains an open problem.
Fabio Martinelli (IIT-CNR)
Ilaria Matteucci (IIT-CNR)
Francesco Santini (Università di Perugia)
ABSTRACT. Partial Model-Checking (PMC) is an efficient tool to reduce the combinatorial explosion of a state-space, arising in the verification of loosely-coupled software systems. At the same time, it is useful to consider quantitative temporal modalities, where a weight represents a level of satisfaction for their verification. This allows for checking whether satisfying such a desired modality is too costly, by comparing the final score to a given threshold. We stir these two ingredients together in order to provide a Quantitative PMC function (QPMC), based on the algebraic structure of semirings. We design a method to extract part of the weight during QPMC, with the purpose of avoiding the evaluation of a modality as soon as the threshold is crossed. Moreover, we extend classical heuristics to be quantitative, and we investigate the complexity of QPMC.
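The threshold idea can be illustrated on a toy example over the tropical semiring (min, +). The evaluator below is a hypothetical sketch, not the paper's QPMC function: costs of subgoals are added, alternatives take the minimum, and evaluation of a branch stops as soon as the accumulated weight crosses the threshold.

```python
# Toy threshold evaluation over the tropical semiring (min, +),
# illustrating early cut-off; the formula encoding is hypothetical.

INF = float("inf")

def evaluate(node, threshold, acc=0):
    """Evaluate a weighted and/or tree over (min, +).

    node is ('leaf', w), ('and', children), or ('or', children).
    Returns the weight, or INF once the threshold is crossed."""
    if acc > threshold:              # branch already too costly
        return INF
    kind = node[0]
    if kind == 'leaf':
        w = node[1]
        return w if acc + w <= threshold else INF
    if kind == 'and':                # conjunction: weights are summed
        total = 0
        for child in node[1]:
            w = evaluate(child, threshold, acc + total)
            if w == INF:
                return INF           # abandon the whole conjunction
            total += w
        return total
    if kind == 'or':                 # disjunction: cheapest alternative
        return min(evaluate(child, threshold, acc) for child in node[1])

formula = ('or', [('and', [('leaf', 3), ('leaf', 4)]),
                  ('and', [('leaf', 2), ('leaf', 9)])])
print(evaluate(formula, threshold=10))  # 7: the second branch exceeds 10
```

The same shape works for any semiring by replacing (min, +) with its two operations; the cut-off is what makes the quantitative check cheaper than full evaluation.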
Alexandre Duret-Lutz (LRDE/EPITA)
Mikuláš Klokočka (Faculty of Informatics, Masaryk University)
Mojmir Kretinsky (Faculty of Informatics, Masaryk University Brno)
Jan Strejcek (Faculty of Informatics, Masaryk University)
ABSTRACT. We present a tool that transforms nondeterministic omega-automata to semi-deterministic omega-automata. The tool, called Seminator, accepts transition-based generalized Büchi automata (TGBA) as input and produces automata with two kinds of semi-determinism. The implemented procedure performs degeneralization and semi-determinization simultaneously and employs several other optimizations. We experimentally evaluate Seminator in the context of LTL to semi-deterministic automata translation.
Evgenia Ternovska (Simon Fraser University)
David Mitchell (Simon Fraser University)
ABSTRACT. Solving complex problems can involve non-trivial combinations of distinct knowledge bases and problem solvers. The Algebra of Modular Systems is a knowledge representation framework that provides a method for formally specifying such systems in purely semantic terms. Many practical systems based on expressive formalisms solve the model expansion task. In this paper, we construct a solver for the model expansion task for a complex modular system from an expression in the algebra and black-box propagators or solvers for the primitive modules. To this end, we define a general notion of propagators equipped with an explanation mechanism, an extension of the algebra to propagators, and a lazy conflict-driven learning algorithm. The result is a framework for seamlessly combining solving technology from different domains to produce a solver for a combined system.
Lakhdar Sais (CRIL - CNRS, Université d'Artois)
Yakoub Salhi (CRIL - CNRS, Université d'Artois)
ABSTRACT. In this paper, we propose a new approach for defining tractable classes in the propositional satisfiability problem (SAT, for short). The basic idea consists in transforming SAT instances into instances of the problem of finding a maximum independent set. In this context, we only consider propositional formulae in conjunctive normal form where each clause is either positive or binary negative. Tractable classes are obtained from existing polynomial time algorithms for the problem of finding a maximum independent set in the case of different graph classes, such as claw-free graphs and perfect graphs. We show, in particular, that the pigeonhole principle belongs to one of the defined tractable classes. Furthermore, we propose a characterization of the minimal models in the largest considered fragment based on the maximum independent set problem.
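One natural way to see the connection (a sketch of the idea, not necessarily the paper's exact construction): the binary negative clauses (not x or not y) form a conflict graph on the variables, an assignment satisfies every negative clause exactly when its set of true variables is an independent set in that graph, and the positive clauses must each be hit by a true variable.

```python
from itertools import combinations

# Sketch: a CNF whose clauses are either positive or binary negative is
# satisfiable iff some independent set of the conflict graph intersects
# every positive clause. Brute force here, fine for tiny instances.

def satisfiable(variables, positive_clauses, conflict_edges):
    """positive_clauses: lists of variables; conflict_edges: pairs of
    variables coming from binary negative clauses (-u or -v)."""
    for k in range(len(variables) + 1):
        for true_vars in combinations(variables, k):
            s = set(true_vars)
            # independent: no conflict edge has both endpoints true
            independent = all(not (u in s and v in s)
                              for u, v in conflict_edges)
            # every positive clause contains some true variable
            hits_all = all(s & set(c) for c in positive_clauses)
            if independent and hits_all:
                return True
    return False

# (x1 or x2) and (x2 or x3) and (not x1 or not x3)
print(satisfiable([1, 2, 3],
                  positive_clauses=[[1, 2], [2, 3]],
                  conflict_edges=[(1, 3)]))  # True: {x2} suffices
```

The tractable classes then arise by restricting the conflict graph to classes (claw-free, perfect, ...) where the maximum independent set problem is polynomial.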
ABSTRACT. We introduce a new proof-theoretic framework which enhances the expressive power of bunched sequents by extending them with a hypersequent structure. We prove a general cut-elimination theorem that applies to bunched hypersequent calculi satisfying general rule conditions and then adapt the methods of transforming axioms into rules to provide cut-free bunched hypersequent calculi for a large class of logics extending the distributive commutative Full Lambek calculus DFLe and Bunched Implication logic BI. The methodology is then used to formulate new logics equipped with a cut-free calculus in the vicinity of Boolean BI.
Peter Schneider-Kamp (University of Southern Denmark)
ABSTRACT. In 2016, Heule, Kullmann and Marek solved the Boolean Pythagorean Triples problem: is there a binary coloring of the natural numbers such that every Pythagorean triple contains an element of each color? By encoding a finite portion of this problem as a propositional formula and showing its unsatisfiability, they established that such a coloring does not exist. Subsequently, this answer was verified by a correct-by-construction checker extracted from a Coq formalization, which was able to reproduce the original proof. However, none of these works formally addresses the relationship between the propositional formula that was constructed and the mathematical problem being considered.
In this work, we formalize the Boolean Pythagorean Triples problem in Coq. We recursively define a family of propositional formulas, parameterized on a natural number n, and show that unsatisfiability of this formula for any particular n implies that there does not exist a solution to the problem. We formalize the mathematical argument behind the simplification step in the original proof of unsatisfiability, thereby reducing our dependency on previous work.
Aude Maignan (Laboratoire Jean Kuntzmann, UPMF)
ABSTRACT. We propose a rewriting modulo approach to define parallel transformation of graph structures in the context of overlapping graph rewrite systems. As parallel transformation of a graph does not produce a graph in general, we first propose sufficient conditions that ensure the closure of graphs under parallel rewrite relations. Then we introduce and discuss two parallel rewrite relations over graphs. The first relation is functional and thus deterministic; the other one is not functional in general. We propose sufficient conditions under which the second relation is deterministic.
ABSTRACT. There exist powerful techniques to infer upper bounds on the innermost runtime complexity of term rewrite systems (TRSs), i.e., on the lengths of rewrite sequences that follow an innermost evaluation strategy. However, the techniques to analyze the (full) runtime complexity of TRSs are substantially weaker. In this paper, we present a sufficient criterion to ensure that the runtime complexity of a TRS coincides with its innermost runtime complexity. This criterion can easily be checked automatically and it allows one to use all techniques and tools for innermost runtime complexity in order to analyze (full) runtime complexity. By extensive experiments with an implementation of our results in the tool AProVE, we show that this improves the state of the art of automated complexity analysis significantly.
Cezary Kaliszyk (University of Innsbruck)
Josef Urban (Czech Technical University in Prague)
ABSTRACT. Techniques combining machine learning with translation to automated reasoning have recently become an important proof assistant component. Such “hammer” techniques complement traditional proof assistant automation as implemented by tactics and decision procedures. In this paper we present a unified proof assistant automation approach which attempts to automate the selection of appropriate tactics and tactic sequences, combined with an optimized small-scale hammering approach. We implement the technique as a tactic-level automation for HOL4: TacticToe. TacticToe explores different tactic-level proof paths with a modified A* algorithm implemented directly in HOL4, guiding the selection of tactics by learning from a large number of previous tactic-level proofs. Unlike existing hammer methods, TacticToe avoids translation to FOL, working directly on the HOL level. By combining tactic prediction and premise selection, TacticToe is able to re-prove 39% of 7902 HOL4 theorems in 5 seconds, whereas the best single HOL(y)Hammer strategy solves 32% in the same amount of time.
Angelo Montanari (University of Udine)
Mark Reynolds (The University of Western Australia)
ABSTRACT. Linear Temporal Logic (LTL) is a de-facto standard formalism for expressing properties of systems and temporal constraints in Formal Verification, Artificial Intelligence, and other areas. The problem of LTL satisfiability is thus of prominent importance to check the consistency of such temporal specifications. Although adding past operators to LTL does not increase its expressive power, interest in explicitly handling the past in temporal logics has recently grown because of the clarity and succinctness that those operators provide. In this work, a recently proposed one-pass tree-shaped tableau system for LTL is extended to support past operators. The modularity of the required changes provides evidence for the claimed ease of extensibility of this tableau system.
Alexander Steen (Freie Universität Berlin)
Christoph Benzmüller (Freie Universität Berlin)
ABSTRACT. We present a procedure for algorithmically embedding problems formulated in higher-order modal logic into classical higher-order logic. The procedure was implemented as a stand-alone tool and can be used as a preprocessor for turning TPTP THF-compliant theorem provers into provers for various modal logics. The choice of the concrete modal logic is thereby specified within the problem as a meta-logical statement. This specification format as well as the underlying semantics parameters are discussed, and the implementation and the operation of the tool are outlined. By combining our tool with one or more THF-compliant theorem provers we obtain the most widely applicable modal logic theorem prover available to date, i.e., no other available prover covers more variants of propositional and quantified modal logics. Despite this generality, our approach remains competitive, at least for quantified modal logics, as our experiments demonstrate.
ABSTRACT. We design a new theory of higher order functions that is well-suited for the complexity analysis of a standard higher order functional language à la ML. We manage to express the interpretation of a given program in terms of a least fixpoint and we show that, when restricted to functions bounded by higher order polynomials, it characterizes exactly the classes of tractable functions known as Basic Feasible Functions at any order.
Juha Kontinen (University of Helsinki)
Sebastian Link (University of Auckland)
ABSTRACT. Inclusion dependencies are one of the most important database constraints. In isolation their finite and unrestricted implication problems coincide, are finitely axiomatizable, PSPACE-complete, and fixed-parameter tractable in their arity. In contrast, finite and unrestricted implication problems for the combined class of functional and inclusion dependencies deviate from one another and are each undecidable. The same holds true for the class of embedded multivalued dependencies. An important tractable fragment of embedded multivalued dependencies is the class of independence atoms. These stipulate independence between two attribute sets in the sense that for every two tuples there is a third tuple that agrees with the first tuple on the first attribute set and with the second tuple on the second attribute set. For independence atoms, their finite and unrestricted implication problems coincide, are finitely axiomatizable, and decidable in cubic time. In this article, we study the implication problems of the combined class of independence atoms and inclusion dependencies. We show that their finite and unrestricted implication problems coincide, are finitely axiomatizable, PSPACE-complete, and fixed-parameter tractable in their arity. Hence, significant expressivity is gained without sacrificing any of the desirable properties that inclusion dependencies have in isolation. Finally, we establish an efficient condition that is sufficient for independence atoms and inclusion dependencies not to interact. The condition ensures that we can apply known algorithms for deciding implication of the individual classes of independence atoms and inclusion dependencies, respectively, to decide implication for an input that combines both individual classes.
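The independence-atom semantics described above can be checked directly on a relation. The sketch below uses illustrative attribute names and a tiny hand-made relation; it implements exactly the "for every two tuples there is a third" condition.

```python
from itertools import product

# Check of an independence atom X _|_ Y on a relation: for every two
# tuples t1, t2 there must be a tuple t3 agreeing with t1 on X and
# with t2 on Y. Attribute names and data below are illustrative.

def satisfies_independence(relation, X, Y):
    """relation: list of dicts; X, Y: lists of attribute names."""
    for t1, t2 in product(relation, repeat=2):
        if not any(all(t3[a] == t1[a] for a in X) and
                   all(t3[b] == t2[b] for b in Y)
                   for t3 in relation):
            return False
    return True

# The full cross product of values satisfies course _|_ grade ...
r1 = [{'course': c, 'grade': g} for c in ('db', 'logic') for g in (1, 2)]
print(satisfies_independence(r1, ['course'], ['grade']))  # True

# ... while a relation missing one combination does not.
r2 = [t for t in r1 if not (t['course'] == 'db' and t['grade'] == 2)]
print(satisfies_independence(r2, ['course'], ['grade']))  # False
```

This naive check is cubic in the size of the relation, mirroring the shape of the condition; the implication problems studied in the article are about entailment between such atoms (and inclusion dependencies), not about checking a single relation.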
ABSTRACT. Context-free language reachability (CFL-R) is a fundamental solving vehicle for computing essential compiler optimisations and static program analyses. Unfortunately, solvers for CFL-R encounter both inherently expensive problem formulations and frequent alterations to the underlying formalism. As such, tool designers are forced to create custom-tailored implementations with long development times and limited reusability. A better framework is crucial to facilitate research and development in CFL-R.
In this work we present Cauliflower, a CFL-R solver generator that creates parallel executable C++ code from an input CFL-R rule-based specification. With Cauliflower, developers create working tools rapidly, avoiding lengthy and error-prone manual implementations. Cauliflower's domain-specific language provides semantic extensions including reversal, branching, disconnection and templating. In practical experiments, Cauliflower achieves an average speedup of 1.8x compared with the best general purpose tools, and matches the performance of application-specific tools on many benchmarks.
ABSTRACT. Incorporating extensional equality into a dependent intensional type system such as the Calculus of Constructions provides stronger type-checking capabilities and makes proof development closer to intuition. Since strong forms of extensionality lead to undecidable type-checking, a good trade-off is to extend intensional equality with a decidable first-order theory T, as done in CoqMT, which uses matching modulo T for the weak and strong elimination rules; we call these rules T-elimination. So far, type-checking in CoqMT is known to be decidable in the presence of a cumulative hierarchy of universes and weak T-elimination. Further, it has been shown by Wang, with a formal proof in Coq, that consistency is preserved in the presence of weak and strong elimination rules, which actually implies consistency in the presence of weak and strong T-elimination rules, since T is already present in the conversion rule of the calculus.
We justify here CoqMT's type-checking algorithm by showing strong normalization as well as the Church-Rosser property of beta-reductions augmented with CoqMT's weak and strong T-elimination rules. This therefore concludes successfully the meta-theoretical study of CoqMT.
ABSTRACT. The deep inference presentation of multiplicative exponential linear logic (MELL) benefits from a rich combinatorial analysis with many more proofs in comparison to its sequent calculus presentation. In the deep inference setting, all the sequent calculus proofs are preserved. Moreover, many other proofs become available, and some of these proofs are much shorter. However, proof search in deep inference is subject to greater nondeterminism, and this nondeterminism constitutes a bottleneck for applications. We therefore address the problem of reducing nondeterminism in the deep inference presentation of MELL by refining and extending our technique that has previously been applied to multiplicative linear logic and classical logic. We show that, besides the nondeterminism in commutative contexts, the nondeterminism in exponential contexts can be reduced in a proof-theoretically clean manner. We validate the improvement in accessing the shorter proofs by experiments with our implementation.
Rody Kersten (Carnegie Mellon University Silicon Valley)
Philipp Ruemmer (Uppsala University, Department of Information Technology)
Martin Schäf (SRI International)
ABSTRACT. Heap and data structures represent one of the biggest challenges when applying model checking to the analysis of software programs: in order to verify (unbounded) safety of a program, it is typically necessary to formulate quantified inductive invariants that state properties about an unbounded number of heap locations. Methods like Craig interpolation, which are commonly used to infer invariants in model checking, are often ineffective when heap is involved. To address this challenge, we introduce a set of new proof and program transformation rules for verifying object-oriented programs with the help of space invariants, which (implicitly) give rise to quantified invariants. Leveraging advances in Horn solving, we show how space invariants can be derived fully automatically, and how the framework can be used to effectively verify safety of Java programs.
Xavier Thirioux (IRIT- CNRS)
Pierre-Loic Garoche (ONERA)
Arie Gurfinkel (University of Waterloo)
Hamza Bourbouh (ENSEEIHT/IRIT)
Christophe Garion (ISAE-SUPAERO/DISC)
ABSTRACT. Stateflow is a widely used modeling framework for embedded and cyber-physical systems in which control software interacts with physical processes. It is a highly complex language with no formal semantics. In this work, we present a fully automated safety verification technique for Stateflow models. Our approach is twofold: (i) we faithfully compile Stateflow models into hierarchical state machines, and (ii) we use an automated logic-based verification engine to decide the validity of safety properties. The starting point of our approach is a denotational semantics of Stateflow. We propose a compilation process using continuation-passing style (CPS) denotational semantics. Our compilation technique preserves the structural and modal behavior of the system, making the safety analysis of such models more tractable. The overall approach is implemented as an open source toolbox that can be integrated into the existing MathWorks Simulink/Stateflow modeling framework. We present preliminary experimental evaluations that illustrate the effectiveness of our approach in the safety verification of industrial scale Stateflow models.
Martin Suda (Technische Universität Wien)
Martina Seidl (Johannes Kepler University Linz)
Hans Tompits (Vienna University of Technology)
Armin Biere (Johannes Kepler University Linz)
ABSTRACT. Blocked clauses provide the basis for powerful reasoning techniques used in SAT, QBF, and DQBF solving. Their definition, which relies on a simple syntactic criterion, guarantees that they are both redundant and easy to find. In this paper, we lift the notion of blocked clauses to first-order logic. We introduce two types of blocked clauses, one for first-order logic with equality and the other for first-order logic without equality, and prove their redundancy. In addition, we give a polynomial algorithm for checking whether a clause is blocked. Based on our new notions of blocking, we implemented a novel first-order preprocessing tool. Our experiments showed that many first-order problems in the TPTP library contain a large number of blocked clauses. Moreover, we observed that their elimination can improve the performance of modern theorem provers, especially on satisfiable problem instances.
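For reference, the propositional notion that the paper lifts to first-order logic can be checked in a few lines: a clause C is blocked in a CNF F if it contains a literal l such that every resolvent of C with a clause of F on l is a tautology. The sketch below uses the usual integer encoding of literals and is an illustration, not the paper's first-order algorithm.

```python
# Propositional blocked-clause check: literals are nonzero integers,
# negation is arithmetic negation. A clause with a blocking literal is
# redundant and can be eliminated without affecting satisfiability.

def is_tautology(clause):
    return any(-lit in clause for lit in clause)

def is_blocked(C, F):
    """Check whether clause C is blocked with respect to F \\ {C}."""
    others = [D for D in F if D != C]
    for l in C:
        resolvable = [D for D in others if -l in D]
        resolvents = [(set(C) | set(D)) - {l, -l} for D in resolvable]
        if all(is_tautology(r) for r in resolvents):
            return True   # l blocks C
    return False

# F = (a or b) and (not a or not b) and (not b or c)
F = [[1, 2], [-1, -2], [-2, 3]]
print(is_blocked([1, 2], F))  # True: the resolvent on literal 1
                              # is {b, not b}, a tautology
```

The check is polynomial in the size of F, which matches the "easy to find" property mentioned above; the first-order versions (with and without equality) refine the tautology test accordingly.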
ABSTRACT. It is known that one can extract Craig interpolants from so-called local derivations. An interpolant extracted from such a derivation is a Boolean combination of formulas occurring in the derivation. However, standard complete proof systems, such as superposition, for theories having the interpolation property are not necessarily complete for local proofs.
In this paper we investigate interpolant extraction from non-local proofs in the superposition calculus and prove a number of general results about interpolant extraction and the complexity of extracted interpolants. In particular, we prove that the number of quantifier alternations in first-order interpolants of formulas without quantifier alternations is unbounded. This result has far-reaching consequences for using local proofs as a foundation for interpolating proof systems: any such proof system should deal with formulas of arbitrary quantifier complexity.
To search for alternatives for interpolating proof systems, we consider several variations on interpolation and local proofs. Namely, we give an algorithm for building interpolants from resolution refutations in logic without equality and discuss additional constraints under which this approach can also be used for logic with equality. We finally propose a new direction related to interpolation via local proofs in first-order theories.
Carlos Olarte (Pontificia Universidad Javeriana Cali)
Elaine Pimentel (UFRN)
ABSTRACT. It is well known that context dependent logical rules can be problematic both to implement and reason about. This is one of the factors driving the quest for better behaved, i.e., local, logical systems. In this work we propose such a local system for linear logic (LL) based on linear nested sequents (LNS). Relying on that system, we propose a general framework for modularly describing systems combining, coherently, substructural behaviors inherited from LL and simply dependent multimodalities. This class of systems includes linear, elementary, affine, bounded and subexponential linear logics and extensions of multiplicative additive linear logic (MALL) with normal modalities, as well as general combinations of them. The resulting LNS systems can be adequately encoded into (plain) linear logic, supporting the notion that LL is, in fact, a “universal framework” for the specification of logical systems. From the theoretical point of view, we give a uniform presentation of linear logics featuring different axioms for the modal operators. From the practical point of view, our results lead to a generic way of constructing theorem provers for different logics, all of them based on the same grounds. This opens the possibility of using the same logical framework for reasoning about all such logical systems.
Alexander Maringele (University of Innsbruck)
Georg Moser (University of Innsbruck)
ABSTRACT. In this tool paper we describe a variation of Nintendo's "Super Mario World", dubbed "Super Formula World", that creates its game maps based on an input quantified Boolean formula. Thus in "Super Formula World", Mario the plumber not only saves his girlfriend Princess Peach, but also acts as a QBF solver on the side. The game is implemented in Java and is platform independent.
Our implementation rests on abstract frameworks by Aloupis et al. that allow the analysis of the computational complexity of a variety of famous video games. In particular it is a straightforward consequence of these results to provide a reduction from QSAT to "Super Mario World". By specifying this reduction in a precise way we obtain the core engine of "Super Formula World". Similarly "Super Formula World" implements a reduction from SAT to "Super Mario Bros.", yielding significantly simpler game worlds.
Geoffrey Irving (Google Research)
Christian Szegedy (Google Research)
Cezary Kaliszyk (University of Innsbruck)
ABSTRACT. Application of deep learning techniques lies at the heart of several significant AI advances and breakthroughs in the past years, including object recognition and detection, image captioning, machine translation, speech recognition and synthesis, and playing the game of Go.
Automated first-order theorem provers can aid in the formalization and verification of mathematical theorems and play a crucial role in program analysis, theory reasoning, security, interpolation, and system verification.
Here we suggest deep learning based guidance to the proof process of E Prover. We train and compare several deep neural network models on the traces of existing ATP proofs of Mizar statements and use them for selecting processed clauses during proof search. We give experimental evidence that with a hybrid, two-phase approach, deep learning based guidance can significantly reduce the average number of proof search steps while increasing the number of theorems proved.
Using a few proof guidance strategies that leverage deep neural networks, we have found first-order proofs of 6.67% of the first-order logic translations of the Mizar Mathematical Library theorem statements that did not previously have ATP generated proofs. This increases the ratio of statements in the corpus with ATP generated proofs from 56% to 59%.
Fred Freitas (Universidade Federal de Pernambuco (UFPE))
Jens Otten (University of Oslo)
ABSTRACT. In this paper, we introduce RACCOON, a reasoner based on the connection calculus ALC theta-CM for the description logic (DL) ALC. We briefly describe the calculus, while RACCOON's internal data structures and functioning are presented in detail. Currently, RACCOON carries out consistency checks and can be run online; its code is also publicly available. Besides the calculus and the system description, promising results of a comparison between the new system and other ALC reasoners are shown and discussed.
Grigory Fedyukovich (University of Washington Computer Science & Engineering)
ABSTRACT. Simultaneous occurrences of multiple recurrence relations in a system of non-linear constrained Horn clauses are crucial for proving its satisfiability. A solution of such a system is often inexpressible in the constraint language. We propose to synchronize recurrent computations, thus increasing the chances for a solution to be found. We introduce a notion of CHC product that allows us to formulate a lightweight iterative algorithm for merging recurrent computations into groups, and we prove its soundness. The evaluation over a set of systems handling lists and linear integer arithmetic confirms that the transformed systems are drastically simpler to solve than the original ones.
ABSTRACT. Delete Resolution Asymmetric Tautology (DRAT) proofs have become a \emph{de facto} standard to certify unsatisfiability results from SAT solvers with inprocessing. However, DRAT shows behaviors notably different from other proof systems: DRAT inferences are non-monotonic, and clauses that are not consequences of the premises can be derived. In this paper, we clarify some discrepancies in the notions of reverse unit propagation (RUP) clauses and asymmetric tautologies (AT), and furthermore develop the concept of resolution consequences. This allows us to present an intuitive explanation of RAT in terms of permissive definitions. We prove that a formula derived using RATs can be stratified into clause sets depending on which definitions they require, which gives a strong invariant along RAT proofs. We furthermore study its interaction with clause deletion, characterizing DRAT derivability as satisfiability preservation.
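The RUP notion at the core of this discussion is easy to state operationally: a clause C is a RUP clause of a CNF F when unit propagation on F together with the unit negations of C's literals yields a conflict. The sketch below is an illustration of that check, not a full DRAT checker; literals are nonzero integers.

```python
# Minimal reverse-unit-propagation (RUP) check over integer literals.

def unit_propagate(clauses):
    """Return False on conflict, else the set of propagated literals."""
    assignment = set()
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(lit in assignment for lit in clause):
                continue  # clause already satisfied
            unassigned = [lit for lit in clause if -lit not in assignment]
            if not unassigned:
                return False           # conflict: clause falsified
            if len(unassigned) == 1:
                assignment.add(unassigned[0])  # unit clause fires
                changed = True
    return assignment

def is_rup(C, F):
    """C is RUP w.r.t. F iff propagating F plus the negations of C's
    literals reaches a conflict."""
    return unit_propagate(F + [[-lit] for lit in C]) is False

F = [[1, 2], [-1, 2], [1, -2]]
print(is_rup([1], F))  # True: assuming -1 propagates 2, then a conflict
```

RAT extends this by additionally allowing clauses all of whose resolvents on a pivot literal are RUP, which is where the non-monotonic, definition-like behavior discussed in the paper comes from.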
ABSTRACT. The main security mechanism for enforcing memory isolation in operating systems is provided by page tables. The hardware-implemented Translation Lookaside Buffer (TLB) caches these, and therefore the TLB and its consistency with memory are security critical for OS kernels, including formally verified kernels such as seL4. If performance is paramount, this consistency can be subtle to achieve; yet, all major formally verified kernels currently leave the TLB as an assumption.
In this paper, we present a formal model of the Memory Management Unit (MMU) for the ARM architecture which includes the TLB, its maintenance operations, and its derived properties. We integrate this specification into the Cambridge ARM model. We derive sufficient conditions for TLB consistency and abstract the functional details of the MMU for reasoning about executions in the presence of cached address translation.
ABSTRACT. We outline the implementation of a query compiler for relational queries that generates query plans with respect to a database schema, that is, a set of arbitrary first-order constraints, and a distinguished subset of predicate symbols from the underlying signature that correspond to access paths. The compiler is based on a variant of the Craig interpolation theorem, with reasoning realized via a modified analytic tableau proof procedure. This procedure decouples the generation of candidate plans that are interpolants from the tableau proof procedure, and applies A*-based search with respect to an external cost model to arbitrate among the alternative candidate plans. The tableau procedure itself is implemented as a virtual machine that operates on a compiled and optimized byte-code that faithfully implements reasoning with respect to the database schema constraints and a user query.