Mathematical Interface Theories for System Integration
ABSTRACT. Formal systems such as interactive theorem provers and computer algebra systems come with their own libraries or ontologies of mathematical concepts they operate on. These libraries are largely unconnected and (when considered collectively) contain huge duplications of content.
While the TPTP language already provides a standard for formal logics that makes the test problems in its library available to various formal systems, a similar language and associated ontology are needed to effectively transfer more advanced mathematical knowledge.
We present the Math-in-the-Middle library, a semi-formal archive of formal mathematics in MMT, intended to serve as a collection of interface theories for mathematical concepts.
Its abstract representation in OMDoc/MMT makes it easily parsable and hence available to any formal system, allowing it to serve as a common ontology to exchange knowledge across systems and logical foundations. Its capability to handle content on various levels of formality makes it conveniently extensible even with highly complicated definitions and theorems not easily expressible in terms of simple logical expressions.
The Math-in-the-Middle library is currently seeded with ~200 theories from algebra, arithmetic, calculus, topology, geometry, graph theory, probability theory, functional analysis and measure theory, as well as various theory morphisms between them.
It is already used to facilitate knowledge exchange between computer algebra systems, databases of mathematical objects and modelling software in the OpenDreamKit and MaMoRed projects.
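As a rough illustration of how interface theories can mediate between systems (a minimal Python sketch with invented names; OMDoc/MMT's actual data model is far richer), a symbol can be translated from one library's vocabulary into another's along a theory morphism:

from dataclasses import dataclass, field

@dataclass
class Theory:
    name: str
    symbols: set = field(default_factory=set)

@dataclass
class Morphism:
    source: Theory
    target: Theory
    mapping: dict          # source symbol -> expression over the target

def translate(symbol, morphism):
    """Translate a source-theory symbol along a theory morphism."""
    if symbol not in morphism.source.symbols:
        raise ValueError(f"{symbol} not declared in {morphism.source.name}")
    return morphism.mapping[symbol]

# Example: a view from an abstract Monoid theory into integer arithmetic.
monoid = Theory("Monoid", {"op", "unit"})
integers = Theory("IntegerArithmetic", {"plus", "zero"})
view = Morphism(monoid, integers, {"op": "plus", "unit": "zero"})
print(translate("op", view))   # -> plus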
ABSTRACT. In previous work, we translated the mathematical library of the IMPS system (at risk of being lost, as the system is no longer actively developed) to OMDoc and partially verified the result within the MMT system against an implementation of the underlying logic of IMPS.
However, so far we include proof scripts only as informal (not machine-checkable) information in our translation. Hence, our goal now is to extend our implementation of the translation to also include as much formal information as possible about the IMPS proof commands, user-defined macetes and their combination language, in the hope of being able to formally verify (parts of) the translated proofs via MMT.
ABSTRACT. We study possibilities of using symbol elimination in program verification and synthesis. We consider programs in which some instructions or subprograms are not fully specified; the task is to derive conditions on the parameters (or subprograms) which imply the inductivity of certain properties. We then propose a method for property-directed invariant generation and analyze its properties. We indicate how to use similar ideas for proving properties of recursively defined functions.
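To illustrate the kind of question asked (a toy instance, not the authors' actual method), consider the loop body x := x + c with an unspecified parameter c and the candidate property x >= 0; a condition on c making the property inductive can be found with the Z3 Python bindings:

import z3

x, c = z3.Int("x"), z3.Int("c")
inv = x >= 0           # candidate property before the step
post = x + c >= 0      # the same property after executing  x := x + c

s = z3.Solver()
# c makes the property inductive iff  forall x. x >= 0 -> x + c >= 0;
# the free parameter c is implicitly existentially quantified.
s.add(z3.ForAll([x], z3.Implies(inv, post)))
if s.check() == z3.sat:
    print("inductive for c =", s.model()[c])   # e.g. c = 0
# Adding the extra constraint c < 0 instead makes the query unsat:
# negative increments never keep x >= 0 inductive.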
CoRg: Commonsense Reasoning Using a Theorem Prover and Machine Learning
ABSTRACT. Many problems humans face in their everyday life cannot be solved by relying solely on explicitly given facts, but require so-called commonsense knowledge. These facts are often not formalized in knowledge bases, but can be derived indirectly, e.g., from text passages; this, however, results in incomplete and inconsistent knowledge. In recent research, many scientists address commonsense reasoning tasks primarily with information retrieval, machine learning or neural network techniques. While this works rather well on several benchmarks, the procedure remains a black box, unable to explain how the correct answer is derived. In our research we address this shortcoming and develop an explainable and responsible artificial intelligence by integrating a theorem prover into the reasoning process. This theorem prover will give a fundamental basis to the derivation process, make it possible to properly explain it, and ultimately allow an attempt at explaining human reasoning.
For this purpose, a reasoning chain was developed at the University of Koblenz which aims to solve commonsense reasoning problems, e.g., from the COPA (Choice of Plausible Alternatives) benchmark set, each consisting of a problem and two possible answers. The chain converts the input into a logical representation using Knews and adds information from WordNet, ConceptNet and other ontologies. This information is passed to the theorem prover Hyper, which constructs a logical model from it. In the end, this model and the gathered information are fed into a machine learning algorithm, which determines, for each answer candidate, the probability of being the right answer. Primarily we will use recurrent neural networks, as they have already proved their strength at such tasks. More precisely, we want to employ predictive neural networks, as they enable us to re-engineer the derivation process.
Our research goal within the CoRg project (Cognitive Reasoning), conducted together with the University of Koblenz, is to evaluate whether a theorem prover can improve these reasoning tasks within the developed chain. Furthermore, we will evaluate different ontologies, theorem provers and machine learning algorithms. To address the incomplete and inconsistent knowledge, we will also consider combining defeasible and normative reasoning. In the end, we will analyze which inputs were most influential in deriving the results, e.g., with a principal component analysis on a predictive neural network. This will give us significant hints on how human reasoning might take place.
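The data flow of this chain can be summarized in a short sketch; every function below is a hypothetical placeholder standing in for the real component named in its comment, and only the pipeline shape is taken from the text above:

def parse_to_logic(text):           # Knews: natural language -> formulas
    return [text]                                       # placeholder
def add_background(formulas):       # WordNet, ConceptNet, other ontologies
    return formulas                                     # placeholder
def build_model(formulas):          # Hyper: construct a logical model
    return set(formulas)                                # placeholder
def score_answers(model, answers):  # ML ranker, e.g. a recurrent network
    return [0.5 for _ in answers]                       # placeholder

def solve(problem, answers):
    model = build_model(add_background(parse_to_logic(problem)))
    scores = score_answers(model, answers)
    return answers[scores.index(max(scores))]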
Deep Reasoning - Hardware Accelerated Artificial Intelligence
ABSTRACT. Most modern first-order theorem provers rely on methods like Superposition [1] and Resolution [5], where implications of a clause set are computed to eventually generate the empty clause. The chronological order of computing inferences is a key aspect of proof search and is crucial for the system’s performance. It is typically controlled by a clause selection heuristic.
For the task of clause selection, we present a novel heuristic based on artificial neural networks. Deep Learning has demonstrated significant success in several domains [3, 4]. In our approach, clauses are first embedded via a vocabulary that is learned during training and shares features among similar symbols. Compression of clauses to a fixed size is implemented by combining the concepts of dilated convolutions [8] and a dense block [2] to ensure a broad context. The features of a clause to be evaluated are combined with features of the negated conjecture and of the original axiom set, and are processed by a small classifier network. The training is optimized by extensive negative mining and a novel loss function. The heuristic is integrated and tested in the state-of-the-art theorem prover E [6, 7]. In experiments, the network correctly classifies over 90% of all clauses in the test data set with respect to whether they are useful for the proof. However, despite hardware acceleration, the Deep Learning based evaluation is currently several orders of magnitude slower than conventional heuristics, and hence only one out of a sample of 25 new proof problems can be solved with the developed technique, highlighting the challenge for future work.
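The architecture shape described above (vocabulary embedding, dilated convolutions for broad context, pooling to a fixed size, and a small classifier over combined clause/conjecture/axiom features) can be sketched in PyTorch; all dimensions and layer counts below are invented for illustration, and the dense-block connections are elided for brevity:

import torch
import torch.nn as nn

class ClauseScorer(nn.Module):
    """Schematic sketch: embed clause tokens, widen context with
    dilated 1-D convolutions, pool to a fixed-size feature vector,
    then classify it together with conjecture/axiom features."""
    def __init__(self, vocab=512, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.convs = nn.Sequential(
            nn.Conv1d(dim, dim, 3, padding=1, dilation=1), nn.ReLU(),
            nn.Conv1d(dim, dim, 3, padding=2, dilation=2), nn.ReLU(),
            nn.Conv1d(dim, dim, 3, padding=4, dilation=4), nn.ReLU(),
        )
        self.classify = nn.Sequential(
            nn.Linear(3 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def features(self, tokens):                # tokens: (batch, length)
        x = self.embed(tokens).transpose(1, 2)  # -> (batch, dim, length)
        return self.convs(x).mean(dim=2)        # pool to fixed size

    def forward(self, clause, conjecture, axioms):
        f = torch.cat([self.features(clause),
                       self.features(conjecture),
                       self.features(axioms)], dim=1)
        return torch.sigmoid(self.classify(f))  # usefulness probability

# Smoke test on random token ids:
scorer = ClauseScorer()
toks = torch.randint(0, 512, (1, 40))
print(scorer(toks, toks, toks).shape)           # torch.Size([1, 1])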
References
[1] Leo Bachmair and Harald Ganzinger. Rewrite-Based Equational Theorem Proving with Selection and Simplification. Journal of Logic and Computation, 3(4):217–247, 1994.
[2] Gao Huang, Zhuang Liu, Kilian Q Weinberger, and Laurens van der Maaten. Densely connected convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2261–2269, 2017.
[3] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436–444, 2015.
[4] Sarah Loos, Geoffrey Irving, Christian Szegedy, and Cezary Kaliszyk. Deep Network Guided Proof Search. In Thomas Eiter and David Sands, editors, Proc. 21st LPAR, Maun, Botswana, volume 46 of EPiC Series in Computing, pages 85–105, 2017.
[5] J. A. Robinson. A Machine-Oriented Logic Based on the Resolution Principle. Journal of the ACM, 12(1):23–41, 1965.
[6] Stephan Schulz. E – A Brainiac Theorem Prover. AI Communications, 15(2/3):111–126, 2002.
[7] Stephan Schulz. System Description: E 1.8. In Ken McMillan, Aart Middeldorp, and Andrei Voronkov, editors, Proc. of the 19th LPAR, Stellenbosch, volume 8312 of LNCS, pages 735–743. Springer, 2013.
[8] Fisher Yu and Vladlen Koltun. Multi-scale context aggregation by dilated convolutions. Computing Research Repository (CoRR), abs/1511.07122, 2015.
Herbrand's Revenge - SAT Solving for First-Order Theorem Proving
ABSTRACT. We describe a light-weight integration of the propositional SAT solver PicoSAT and the saturation-based superposition prover E. The proof search is driven by the saturation prover. Periodically, the saturation is interrupted and all first-order clauses are grounded. The resulting ground problem is converted to a propositional format and handed to the SAT solver. If the SAT solver reports unsatisfiability, the proof is extracted and reported on the first-order level. First experiments demonstrate the viability of the approach and suggest future extensions. They also yield interesting information about the structure of the search space.
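As an illustration of the final step only (not E's internals): once the first-order clauses have been grounded and mapped to propositional variables, the ground problem is a plain CNF that a PicoSAT binding such as pycosat can refute. Here p(a) is variable 1 and q(a) is variable 2.

import pycosat

# Ground instances of  p(X),  ~p(X) | q(X),  ~q(a)  at X = a:
cnf = [[1], [-1, 2], [-2]]
if pycosat.solve(cnf) == "UNSAT":
    print("ground refutation found; lift it to a first-order proof")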
A Roadmap to Gradually Compare and Benchmark Description Logic Calculi
ABSTRACT. The most expressive layer of the Semantic Web is based on Description Logic (DL) [1]. This fact naturally brought about larger interest in this family of formalisms and their languages. The Semantic Web's choice of DL was surely based on two main strengths of these formalisms: their rich expressiveness and their inherent reasoning possibilities.
Concerning the latter aspect, DL reasoners have been developed with cutting-edge performance, implementing plenty of specific optimization techniques over tableaux-based methods, which have dominated the field for years. On the other hand, promising methods may have been neglected in such a scenario, in which the tough competition is often focused on gains through optimizations. Therefore, perhaps there is still room for “basic research” on DL reasoning, in the sense that other efficient calculi need to be adapted to DL, tuned and tested.
The purpose of this work is to stimulate research on trying out DL calculi other than tableaux by making a careful, detailed comparison between tableaux and other inference methods in a systematic way: first starting with simpler languages (like ALC) without any optimizations; then gradually including optimizations and comparing them; and continuing these iterative steps (enhancing language expressivity, including optimizations, and testing) until reaching the latest advances in optimizations and the most expressive DL fragments, such as SROIQ. The comparison can also be carried out in terms of two additional aspects: memory usage and asymptotic algorithm analysis (worst and average cases). The rationale is to identify whether certain fragments are more suitable for certain inference methods, as well as which aspects or constructs (e.g., the costliest combinations, which usually involve inverses, nominals, equalities, etc.) are sensitive to which calculus.
Concretely, the idea is to start out with our DL connection calculi [2,3] and our reasoner RACCOON (ReAsoner based on the Connection Calculus Over ONtologies) [4], which currently handles ALC, the basic DL fragment, and has displayed surprisingly effective performance in comparison with well-established reasoners such as HermiT, FaCT++ and Konclude.
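A minimal harness for the measurement side of this roadmap (runtime and peak memory per reasoner over a problem set) could look as follows; `reasoners` maps a name to any callable deciding a single problem, and everything here is an illustrative sketch rather than RACCOON's actual test setup:

import time, tracemalloc

def benchmark(reasoners, problems):
    """Run each reasoner on every problem; record answers, wall-clock
    time, and peak Python heap usage. Note: tracemalloc only sees
    Python-level allocations; external reasoner processes would need
    OS-level measurement instead."""
    results = {}
    for name, solve in reasoners.items():
        tracemalloc.start()
        start = time.perf_counter()
        answers = [solve(p) for p in problems]
        elapsed = time.perf_counter() - start
        _, peak = tracemalloc.get_traced_memory()
        tracemalloc.stop()
        results[name] = {"answers": answers,
                         "seconds": elapsed,
                         "peak_bytes": peak}
    return results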
ABSTRACT. We have formalized conflict-driven clause learning (CDCL) in Isabelle/HOL. We then refined the CDCL calculus to include two crucial optimisations: two watched literals and blocking literals. We formalized the data structure and its invariants. Through a chain of refinements carried out using the Isabelle Refinement Framework, we target Imperative HOL and extract imperative Standard ML code. Although our solver is not competitive with the state of the art, it offers acceptable performance for some applications.
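For readers unfamiliar with the optimisation, here is a minimal, unverified Python sketch of unit propagation with two watched literals (DIMACS-style integer literals, watches at clause positions 0 and 1; blocking literals and clause learning are omitted):

def truth(lit, assign):
    """None if unassigned, else the truth value of the literal."""
    v = assign.get(abs(lit))
    return None if v is None else (v == (lit > 0))

def propagate(clauses, watches, assign, start):
    """Assign `start` true, then restore the two-watch invariant:
    unless a clause is satisfied, its two watched literals are
    never both false."""
    queue = [start]
    while queue:
        lit = queue.pop()
        if truth(lit, assign) is False:
            return False                       # conflicting unit literals
        assign[abs(lit)] = lit > 0
        for ci in list(watches[-lit]):         # clauses watching -lit,
            clause = clauses[ci]               # which just became false
            other = clause[0] if clause[1] == -lit else clause[1]
            if truth(other, assign) is True:
                continue                       # clause already satisfied
            for cand in clause[2:]:            # search for a new watch
                if truth(cand, assign) is not False:
                    i, j = clause.index(-lit), clause.index(cand)
                    clause[i], clause[j] = cand, -lit
                    watches[-lit].remove(ci)
                    watches[cand].append(ci)
                    break
            else:                              # no replacement watch:
                if truth(other, assign) is False:
                    return False               # clause is falsified
                queue.append(other)            # clause is unit
    return True                                # no conflict

assign = {}
clauses = [[1, 2], [-1, 3], [-3, -2, 4]]
watches = {l: [] for c in clauses for x in c for l in (x, -x)}
for ci, c in enumerate(clauses):
    watches[c[0]].append(ci); watches[c[1]].append(ci)
print(propagate(clauses, watches, assign, -2), assign)
# -> True {2: False, 1: True, 3: True}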
Hilbert Meets Isabelle: Formalisation of the DPRM Theorem in Isabelle
ABSTRACT. In 1900, at the International Congress of Mathematicians in Paris and in a publication that shortly followed, David Hilbert presented the twenty-three most important problems for the coming century. The tenth problem concerns the solvability of Diophantine equations and was resolved in 1970 by Yuri Matiyasevich (DPRM theorem).

We conduct a formal verification of Matiyasevich's original proof using the proof assistant Isabelle. We formally state each lemma in Isabelle's formal language and prove each required step. The complete formalised proof includes even the simplest details, such as commutativity of matrix addition, as well as more abstract concepts like termination of register machines. At the time of writing, we have completely formalised the proof that exponentiation is Diophantine --- Matiyasevich's key breakthrough from 1970. We have also produced a model of a register machine and shown that, given equations describing a certain register machine, we can encode them as geometric sums (and hence as exponential Diophantine equations). Additional work will produce proofs that simulate a register machine as equations. This will then result in a full formalisation of the DPRM theorem.

Fourteen of the original Hilbert problems have been resolved, and none of the solutions have been formally verified yet. Our project will be the first such complete formal verification of a Hilbert problem. By formalising the DPRM theorem, we see our larger goal as bringing one of the most important mathematical results of the twentieth century into the methods of the twenty-first.
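For reference, the standard definitions behind these statements (general background, not specific to the Isabelle development): a relation is Diophantine if it is the projection of the integer zero set of a polynomial, so "exponentiation is Diophantine" means exhibiting a polynomial P with

    m = n^k \iff \exists x_1, \dots, x_j \in \mathbb{N}.\; P(m, n, k, x_1, \dots, x_j) = 0,

and the register-machine encoding rests on geometric sums, whose closed form

    \sum_{i=0}^{t-1} b^i = \frac{b^t - 1}{b - 1} \qquad (b > 1)

turns statements about machine runs, written out digit by digit in base b, into exponential Diophantine equations.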
ABSTRACT. We propose a system for large-scale theorem proving contests. We hope that such contests could spark interest in the research field, attract a new generation of theorem proving savants, and foster competition among proof assistants. For the proof assistant Isabelle, we construct and evaluate a prototype implementation of our proposed system architecture.
Computational Hermeneutics: Using Automated Reasoning for the Logical Analysis of Natural-Language Arguments
ABSTRACT. In this project, we want to explore the computer-supported application of formal logic (using automated theorem provers and model finders) to issues in philosophy concerned with (i) the methodical evaluation and interpretation of arguments and, building upon this, (ii) the problem of formalization: how to search methodically for the most appropriate logical form(s) of a given natural-language argument by casting its individual statements into expressions of some sufficiently expressive logic (classical or non-classical).
Computational hermeneutics is a holistic, iterative, trial-and-error enterprise, in which we evaluate the adequacy of some candidate formalization of a sentence by computing the logical validity of the whole argument. We start with formalizations of some simple statements (taking them as tentative) and use them as stepping stones on the way to the formalization of the argument's other sentences, repeating the procedure until arriving at a state of reflective equilibrium: a state where our beliefs have the highest degree of coherence and acceptability.
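The iterative procedure can be sketched as a search over candidate formalizations (a toy sketch; `logically_valid` is a hypothetical stand-in for invoking a theorem prover or model finder on the whole argument):

from itertools import product

def formalize(sentences, candidates, logically_valid):
    """Search combinations of candidate formalizations until the whole
    argument (premises entail conclusion) comes out valid; the last
    sentence is taken to be the conclusion."""
    for choice in product(*(candidates[s] for s in sentences)):
        premises, conclusion = list(choice[:-1]), choice[-1]
        if logically_valid(premises, conclusion):
            return dict(zip(sentences, choice))  # first valid reading
    return None  # no coherent reading found with these candidates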
ABSTRACT. Leibniz wrote down, in Latin, a proof of the existence of God accompanied by a note on impossible concepts (a reproduction of the manuscript is presented). A possibly true contradiction is quite unusual and not compatible with mainstream logic or with his “Theodicy”. His example of such a contradiction about an impossible concept, the “square circle”, is still popular in contemporary metaphysics. Similar examples make the proposition of a contradiction's possible truth controversial. An alternative formalization approach using a different kind of concept composition is proposed.
ABSTRACT. Wanda is a fully automatic tool to analyse termination of higher-order term rewriting systems in various styles. In this talk I will present Wanda and discuss the underlying methodology, which includes both a discussion of the theory involved and the strategies for automation.