
Prof. Muggleton's career has concentrated on the development of theory, implementations and applications of Machine Learning, particularly in the field of Inductive Logic Programming. Stephen Muggleton's intellectual contributions within Machine Learning include the introduction of definitions for Inductive Logic Programming (ILP), Predicate Invention, Inverse Resolution, Closed World Specialisation, Predicate Utility, Layered Learning, U-learnability, Self-saturation and Stochastic Logic Programs. Over the last decade he has collaborated increasingly with biological colleagues on applications of Machine Learning to biological prediction tasks. These tasks have included the determination of protein structure, the activity of drugs and toxins, and the assignment of gene function.

09:00 | Meta-Interpretive Learning of Logic Programs SPEAKER: Stephen Muggleton |

10:30 | SPEAKER: Alexander Steen ABSTRACT. We present a procedure for algorithmically embedding problems formulated in higher-order modal logic into classical higher-order logic. The procedure was implemented as a stand-alone tool and can be used as a preprocessor for turning TPTP THF-compliant theorem provers into provers for various modal logics. The choice of the concrete modal logic is thereby specified within the problem as a meta-logical statement. This specification format as well as the underlying semantics parameters are discussed, and the implementation and the operation of the tool are outlined. By combining our tool with one or more THF-compliant theorem provers we obtain the most widely applicable modal logic theorem prover available to date: no other available prover covers more variants of propositional and quantified modal logics. Despite this generality, our approach remains competitive, at least for quantified modal logics, as our experiments demonstrate. |
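The core idea behind such embeddings can be illustrated, in a much simpler setting than the tool's actual higher-order machinery, by the classical "standard translation" of propositional modal logic into first-order logic: box and diamond become quantification over accessible worlds via an accessibility relation R. The sketch below is purely illustrative; the formula encoding and names are assumptions, not the tool's implementation.

```python
# Hypothetical sketch: standard translation of propositional modal
# formulas (nested tuples) into first-order formulas over worlds,
# rendered as strings. box A at world w becomes
# forall v. (R(w,v) -> A at v); diamond is the existential dual.

def standard_translation(formula, world="w"):
    op = formula[0]
    if op == "atom":                      # p  ~>  P(w)
        return f"{formula[1]}({world})"
    if op == "not":
        return f"~({standard_translation(formula[1], world)})"
    if op == "and":
        return (f"({standard_translation(formula[1], world)} & "
                f"{standard_translation(formula[2], world)})")
    if op == "box":                       # []A ~> forall v. (R(w,v) -> A(v))
        v = world + "'"
        return (f"forall {v}. (R({world},{v}) -> "
                f"{standard_translation(formula[1], v)})")
    if op == "dia":                       # <>A ~> exists v. (R(w,v) & A(v))
        v = world + "'"
        return (f"exists {v}. (R({world},{v}) & "
                f"{standard_translation(formula[1], v)})")
    raise ValueError(f"unknown connective: {op}")

print(standard_translation(("box", ("atom", "P"))))
# -> forall w'. (R(w,w') -> P(w'))
```

Conditions on R (reflexivity, transitivity, and so on) then select the concrete modal logic, which is what the meta-logical specification in the abstract's approach amounts to at a high level.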

11:00 | SPEAKER: Martin Suda ABSTRACT. Blocked clauses provide the basis for powerful reasoning techniques used in SAT, QBF, and DQBF solving. Their definition, which relies on a simple syntactic criterion, guarantees that they are both redundant and easy to find. In this paper, we lift the notion of blocked clauses to first-order logic. We introduce two types of blocked clauses, one for first-order logic with equality and the other for first-order logic without equality, and prove their redundancy. In addition, we give a polynomial algorithm for checking whether a clause is blocked. Based on our new notions of blocking, we implemented a novel first-order preprocessing tool. Our experiments showed that many first-order problems in the TPTP library contain a large number of blocked clauses. Moreover, we observed that their elimination can improve the performance of modern theorem provers, especially on satisfiable problem instances. |
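The propositional notion the paper lifts to first-order logic is easy to state: a clause C is blocked in a formula F if C contains a literal l such that every resolvent of C on l with a clause of F is a tautology. The following minimal sketch checks this syntactic criterion for propositional clauses; it is an illustration of the standard definition, not the authors' first-order tool.

```python
# Illustrative propositional blocked-clause check. Clauses are sets of
# integer literals (DIMACS style: -3 is the negation of variable 3).

def is_tautology(clause):
    """A clause is a tautology if it contains a complementary pair."""
    return any(-lit in clause for lit in clause)

def resolvent(c, d, lit):
    """Resolve c and d on lit (lit occurs in c, -lit in d)."""
    return (c - {lit}) | (d - {-lit})

def is_blocked(clause, formula):
    """clause is blocked w.r.t. formula if some literal l in it makes
    every resolvent on l with a clause of formula a tautology."""
    for lit in clause:
        partners = [d for d in formula if -lit in d]
        if all(is_tautology(resolvent(clause, d, lit)) for d in partners):
            return True
    return False

# {1, 2} is blocked on literal 1: its only resolvent with {-1, -2}
# is {2, -2}, a tautology.
print(is_blocked({1, 2}, [{-1, -2}]))        # -> True
print(is_blocked({1}, [{-1, 2}, {-1, -2}]))  # -> False
```

Since the criterion only inspects resolvents syntactically, blocked clauses are cheap to detect, which is what makes their elimination attractive as a preprocessing step.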

11:30 | SPEAKER: unknown ABSTRACT. It is known that one can extract Craig interpolants from so-called local derivations. An interpolant extracted from such a derivation is a boolean combination of formulas occurring in the derivation. However, standard complete proof systems, such as superposition, for theories having the interpolation property are not necessarily complete for local proofs. In this paper we investigate interpolant extraction from non-local proofs in the superposition calculus and prove a number of general results about interpolant extraction and complexity of extracted interpolants. In particular, we prove that the number of quantifier alternations in first-order interpolants of formulas without quantifier alternations is unbounded. This result has far-reaching consequences for using local proofs as a foundation for interpolating proof systems - any such proof system should deal with formulas of arbitrary quantifier complexity. To search for alternatives for interpolating proof systems, we consider several variations on interpolation and local proofs. Namely, we give an algorithm for building interpolants from resolution refutations in logic without equality and discuss additional constraints under which this approach can also be used for logic with equality. We finally propose a new direction related to interpolation via local proofs in first-order theories. |

12:00 | SPEAKER: unknown ABSTRACT. Delete Resolution Asymmetric Tautology (DRAT) proofs have become a de facto standard to certify unsatisfiability results from SAT solvers with inprocessing. However, DRAT shows behaviors notably different from other proof systems: DRAT inferences are non-monotonic, and clauses that are not consequences of the premises can be derived. In this paper, we clarify some discrepancies in the notions of reverse unit propagation (RUP) clauses and asymmetric tautologies (AT), and furthermore develop the concept of resolution consequences. This allows us to present an intuitive explanation of RAT in terms of permissive definitions. We prove that a formula derived using RATs can be stratified into clause sets depending on which definitions they require, which gives a strong invariant along RAT proofs. We furthermore study its interaction with clause deletion, characterizing DRAT derivability as satisfiability preservation. |
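The RUP property mentioned in the abstract has a concise operational reading: a clause C is RUP with respect to a formula F if adding the negation of C to F and running unit propagation yields a conflict. The sketch below illustrates that standard check for propositional clauses; it is a textbook-style illustration, not the paper's formal development.

```python
# Illustrative RUP (reverse unit propagation) check. Clauses are sets
# of integer literals (DIMACS style: -3 is the negation of variable 3).

def unit_propagate(clauses):
    """Run unit propagation to a fixpoint; return the forced literals,
    or None if some clause is falsified (a conflict)."""
    assignment = set()
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(l in assignment for l in clause):
                continue                          # clause already satisfied
            candidates = [l for l in clause if -l not in assignment]
            if not candidates:
                return None                       # all literals falsified
            if len(candidates) == 1:
                assignment.add(candidates[0])     # unit clause: force it
                changed = True
    return assignment

def is_rup(clause, formula):
    """clause is RUP w.r.t. formula if propagating formula together
    with the unit negation of clause produces a conflict."""
    negated_units = [{-l} for l in clause]
    return unit_propagate(list(formula) + negated_units) is None

print(is_rup({2}, [{1, 2}, {-1, 2}]))  # -> True
print(is_rup({1}, [{1, 2}]))           # -> False
```

RAT generalizes this by requiring the RUP property only for the resolvents on a distinguished literal, which is the source of the permissive, definition-like behavior the abstract analyzes.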

14:00 | SPEAKER: Cezary Kaliszyk ABSTRACT. Application of deep learning techniques lies at the heart of several significant AI advances and breakthroughs in the past years including object recognition and detection, image captioning, machine translation, speech recognition and synthesis, and playing the game of Go. Automated first-order theorem provers can aid in the formalization and verification of mathematical theorems and play a crucial role in program analysis, theory reasoning, security, interpolation, and system verification. Here we suggest deep learning based guidance to the proof process of E Prover. We train and compare several deep neural network models on the traces of existing ATP proofs of Mizar statements and use them for selecting processed clauses during proof search. We give experimental evidence that with a hybrid, two-phase approach, deep learning based guidance can significantly reduce the average number of proof search steps while increasing the number of theorems proved. Using a few proof guidance strategies that leverage deep neural networks, we have found first-order proofs of 6.67% of the first-order logic translations of the Mizar Mathematical Library theorem statements that did not previously have ATP-generated proofs. This increases the ratio of statements in the corpus with ATP-generated proofs from 56% to 59%. |

14:30 | SPEAKER: Ozan Kahramanogullari ABSTRACT. The deep inference presentation of multiplicative exponential linear logic (MELL) benefits from a rich combinatoric analysis with many more proofs in comparison to its sequent calculus presentation. In the deep inference setting, all the sequent calculus proofs are preserved. Moreover, many other proofs become available, and some of these proofs are much shorter. However, proof search in deep inference is subject to a greater nondeterminism, and this nondeterminism constitutes a bottleneck for applications. To this end, we address the problem of reducing nondeterminism in the deep inference presentation of MELL by refining and extending our technique that has been previously applied to multiplicative linear logic and classical logic. We show that, besides the nondeterminism in commutative contexts, the nondeterminism in exponential contexts can be reduced in a proof theoretically clean manner. We validate the improvement in accessing the shorter proofs by experiments with our implementation. |

15:00 | SPEAKER: Thibault Gauthier ABSTRACT. Techniques combining machine learning with translation to automated reasoning have recently become an important proof assistant component. Such “hammer” techniques complement traditional proof assistant automation as implemented by tactics and decision procedures. In this paper we present a unified proof assistant automation approach which attempts to automate the selection of appropriate tactics and tactic sequences combined with an optimized small-scale hammering approach. We implement the technique as a tactic-level automation for HOL4: TacticToe. It implements a modified A* algorithm directly in HOL4 that explores different tactic-level proof paths, guiding their selection by learning from a large number of previous tactic-level proofs. Unlike the existing hammer methods, TacticToe avoids translation to FOL, working directly on the HOL level. By combining tactic prediction and premise selection, TacticToe is able to re-prove 39% of 7902 HOL4 theorems in 5 seconds whereas the best single HOL(y)Hammer strategy solves 32% in the same amount of time. |

16:00 | SPEAKER: unknown ABSTRACT. We outline the implementation of a query compiler for relational queries that generates query plans with respect to a database schema, that is, a set of arbitrary first-order constraints, and a distinguished subset of predicate symbols from the underlying signature that correspond to access paths. The compiler is based on a variant of the Craig interpolation theorem, with reasoning realized via a modified analytic tableau proof procedure. This procedure decouples the generation of candidate plans that are interpolants from the tableau proof procedure, and applies A*-based search with respect to an external cost model to arbitrate among the alternative candidate plans. The tableau procedure itself is implemented as a virtual machine that operates on a compiled and optimized byte-code that faithfully implements reasoning with respect to the database schema constraints and a user query. |

16:15 | SPEAKER: Alexander Steen ABSTRACT. Leo-III is an automated theorem prover for (polymorphic) higher-order logic which supports all common TPTP dialects, including THF, TFF and FOF as well as their rank-1 polymorphic derivatives. It is based on a paramodulation calculus with ordering constraints and, in the tradition of its predecessor LEO-II, heavily relies on cooperation with external first-order theorem provers. Unlike LEO-II, Leo-III supports asynchronous cooperation with typed first-order provers and an agent-based internal cooperation scheme. In this paper, we sketch Leo-III's underlying calculus, survey implementation details and give examples of use. |