CICM-13: 13TH CONFERENCE ON INTELLIGENT COMPUTER MATHEMATICS
PROGRAM FOR WEDNESDAY, JULY 29TH

13:00-14:30 Session 8: Automated Theorem Proving
13:00
Induction with Generalization in Superposition Reasoning
PRESENTER: Petra Hozzová

ABSTRACT. We describe an extension of automated induction in superposition-based reasoning that strengthens inductive properties and generalizes the terms over which induction is applied. We implemented our approach in the first-order theorem prover Vampire and evaluated our work against state-of-the-art reasoners automating induction. We demonstrate the strength of our technique by showing that many interesting mathematical properties of natural numbers and lists can be proved automatically using this extension.
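
As an independent illustration of the kind of generalization involved (a minimal Lean sketch, not taken from the paper or from Vampire): in the statement below, a proof by induction on xs only goes through once the accumulator acc is generalized, so that the induction hypothesis holds for arbitrary accumulators.

def revAcc {α : Type} : List α → List α → List α
  | [],      acc => acc
  | x :: xs, acc => revAcc xs (x :: acc)

-- Inducting on `xs` with a fixed `acc` gets stuck; `generalizing acc`
-- strengthens the induction hypothesis to arbitrary accumulators.
theorem length_revAcc {α : Type} (xs acc : List α) :
    (revAcc xs acc).length = xs.length + acc.length := by
  induction xs generalizing acc with
  | nil => simp [revAcc]
  | cons x xs ih =>
      simp only [revAcc, ih, List.length_cons]
      omega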

13:30
Metamath Zero: Designing a Theorem Prover Prover

ABSTRACT. As the usage of theorem prover technology expands, so too does the reliance on the correctness of the tools. Metamath Zero is a verification system that aims for simplicity of logic and implementation, without compromising on efficiency of verification. It is formally specified in its own language, and supports a number of translations to and from other proof languages. This paper describes the abstract logic of Metamath Zero, essentially a multi-sorted first-order logic, as well as the binary proof format and the way in which it can ensure essentially linear-time verification while still being concise and efficient at scale. Metamath Zero currently holds the record for the fastest verification of the set.mm Metamath library of proofs in ZFC (including 71 of Wiedijk's 100 formalization targets), at less than 200 ms. Ultimately, we intend to use it to verify the correctness of the implementation of the verifier down to the binary executable, so it can be used as a root of trust for more complex proof systems.

14:00
Formally Verifying Proofs for Algebraic Identities of Matrices
PRESENTER: Leonard Schmitz

ABSTRACT. We consider proof certificates for identities of rectangular matrices. To automate their construction, we introduce an algebraic framework and supply explicit algorithms relying on non-commutative Groebner bases. We address not only verification, but also exploration and reasoning towards establishing new identities and even proving mathematical properties. In particular, Groebner-driven elimination theory leads us to insightful conclusions. We present several applications, namely efficiently formalized proofs of important identities of matrices: cancellation properties of triple products with Moore-Penrose pseudoinverses, a generalized Sherman-Morrison-Woodbury formula, and an automated derivation of feedback loops in the famous Youla controller parametrization from control theory.

For actual computations we employ the open-source computer algebra system Singular, which is used as a backend by systems like SAGE and OSCAR. Its non-commutative extension Letterplace provides users with all the required functionality. Singular has numerous conversion tools and supports various standards; therefore, it can facilitate integration with existing theorem provers.
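
For reference, the classical Sherman-Morrison-Woodbury identity that the generalized version mentioned above builds on reads (standard textbook form, not taken from the paper), for suitably sized matrices whose indicated inverses exist:

(A + U C V)^{-1} = A^{-1} - A^{-1} U (C^{-1} + V A^{-1} U)^{-1} V A^{-1}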

15:00-16:00 Session 9: Machine Learning, Theorem Proving and AI
15:00
(Invited talk) A Promising Path Towards Autoformalization and General Artificial Intelligence

ABSTRACT. An autoformalization system is an AI that learns to read natural language content and to turn it into an abstract, machine verifiable formalization, ideally by bootstrapping from unlabeled training data with minimum human interaction. This is a difficult task in general, one that would require strong automated reasoning and automated natural language processing capabilities. In this paper, it is argued that autoformalization is a promising path for systems to learn sophisticated, general purpose reasoning in all domains of mathematics and computer science. This could have far reaching implications not just for mathematical research, but also for software synthesis. Here I provide the outline for a realistic path towards those goals and give a survey of recent results that support the feasibility of this direction.

16:30-17:45 Session 10: Machine Learning and Theorem Proving
16:30
The Tactician: A Seamless, Interactive Tactic Learner and Prover for Coq

ABSTRACT. We present Tactician, a tactic learner and prover for the Coq Proof Assistant. Tactician helps users make tactical proof decisions while they retain control over the general proof strategy. To this end, Tactician learns from previously written tactic scripts and either gives users suggestions about the next tactic to be executed or takes over the burden of proof synthesis altogether. Tactician’s goal is to provide users with a seamless, interactive, and intuitive experience together with robust and adaptive proof automation. In this paper, we give an overview of Tactician from the user’s point of view, regarding both day-to-day usage and issues of package dependency management while learning in the large. Finally, we give a peek into Tactician’s implementation as a Coq plugin and machine learning platform.
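
As a rough, generic illustration of suggesting tactics from previously written scripts (this is not Tactician's actual Coq plugin, feature extraction, or model), a nearest-neighbour lookup over featurized goals might look like the Python sketch below; the featurization and example data are invented for the sketch.

from collections import Counter

def features(goal: str) -> set:
    # toy featurization: the set of tokens occurring in the goal text
    return set(goal.replace("(", " ").replace(")", " ").split())

def suggest(goal, history, k=2):
    """history: list of (goal_text, tactic_applied) pairs from earlier proofs."""
    def similarity(other_goal):
        a, b = features(goal), features(other_goal)
        return len(a & b) / max(1, len(a | b))      # Jaccard similarity
    ranked = sorted(history, key=lambda h: similarity(h[0]), reverse=True)
    votes = Counter(tactic for _, tactic in ranked[:k])
    return [tactic for tactic, _ in votes.most_common()]

history = [
    ("forall n : nat, n + 0 = n", "induction n"),
    ("forall l : list nat, l ++ [] = l", "induction l"),
    ("True", "trivial"),
]
# nearest past goals vote on the next tactic, best match first
print(suggest("forall m : nat, m + 0 = m", history))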

16:45
Simple Dataset for Proof Method Recommendation in Isabelle/HOL (Dataset Description)

ABSTRACT. Recently, a growing number of researchers have applied machine learning to assist users of interactive theorem provers. However, the expressive nature of the underlying logics and the esoteric structure of proof scripts impede machine learning practitioners from achieving large-scale success in this field. In this data showcase, we present a simplified dataset that represents the essence of tactic-based interactive theorem proving in Isabelle/HOL. Our simple data format allows machine learning practitioners to test their algorithms for proof method recommendation in Isabelle/HOL, even if they are unfamiliar with theorem proving.

17:00
Tree Neural Networks in HOL4

ABSTRACT. We present an implementation of tree neural networks within the proof assistant HOL4. Their architecture makes them naturally suited for approximating functions whose domain is a set of formulas. We measure the performance of our implementation and compare it with other machine learning predictors on the tasks of evaluating arithmetical expressions and estimating the truth of propositional formulas.
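
The following is a minimal, generic sketch of a tree neural network in Python/NumPy for an arithmetic-expression task like the one mentioned above; it is not the HOL4 implementation from the paper, and the node kinds, layer sizes, and toy expression encoding are assumptions made for illustration.

import numpy as np

rng = np.random.default_rng(0)
DIM = 16  # embedding dimension (assumption)

def dense(n_in, n_out):
    return rng.normal(0, 0.1, (n_out, n_in)), np.zeros(n_out)

# one parameter block per node kind: leaves "zero"/"one", unary "suc", binary "add"
params = {
    "zero": rng.normal(0, 0.1, DIM),
    "one":  rng.normal(0, 0.1, DIM),
    "suc":  dense(DIM, DIM),
    "add":  dense(2 * DIM, DIM),
    "head": dense(DIM, 1),
}

def embed(tree):
    """Recursively map an expression tree to a fixed-size vector."""
    kind, *children = tree
    if kind in ("zero", "one"):
        return params[kind]                      # leaf embedding
    W, b = params[kind]
    x = np.concatenate([embed(c) for c in children])
    return np.tanh(W @ x + b)                    # combine child embeddings

def predict(tree):
    W, b = params["head"]
    return (W @ embed(tree) + b).item()          # decode a numeric value

# usage: ("add", ("one",), ("suc", ("one",))) stands for 1 + suc(1)
print(predict(("add", ("one",), ("suc", ("one",)))))  # untrained; training loop omitted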

17:15
Guiding Connection Tableau by Recurrent Neural Networks

ABSTRACT. We present an experiment on applying a sequence-to-sequence recurrent neural system to learn clause choices in the extension steps of the connection tableau proof calculus. The neural system encodes a sequence of literals (or clauses) from the current branch of a partial proof tree into a hidden state (represented as a real vector); using this state, the system selects a clause for extending the proof tree. We describe the data used for training, the training setup, and the results. We compare the recurrent neural approach with another learning system, gradient boosted trees. We also present an experiment in which the neural system does not select the next clause for the extension step but instead tries to construct it from the available logical symbols, which is in a sense a conjecturing task.
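
Below is a generic Python/PyTorch sketch of the overall idea: a recurrent encoder maps the literals of the current branch to a hidden state, and candidate extension clauses are scored against that state. The vocabulary, network sizes, and dot-product scoring are illustrative assumptions, not the authors' setup.

import torch
import torch.nn as nn

VOCAB, DIM = 100, 64  # token vocabulary size and hidden size (assumptions)

class ClauseSelector(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        self.branch_enc = nn.GRU(DIM, DIM, batch_first=True)
        self.clause_enc = nn.GRU(DIM, DIM, batch_first=True)

    def forward(self, branch_tokens, candidate_clauses):
        # branch_tokens: (1, L) token ids of the literals on the current branch
        # candidate_clauses: list of (1, Li) token-id tensors, one per clause
        _, h = self.branch_enc(self.embed(branch_tokens))     # (1, 1, DIM)
        scores = []
        for clause in candidate_clauses:
            _, c = self.clause_enc(self.embed(clause))
            scores.append((h.squeeze() * c.squeeze()).sum())  # dot-product score
        return torch.stack(scores)  # higher score = preferred extension clause

# usage with dummy token ids
model = ClauseSelector()
branch = torch.randint(0, VOCAB, (1, 7))
clauses = [torch.randint(0, VOCAB, (1, n)) for n in (3, 5, 4)]
print(model(branch, clauses).softmax(dim=0))  # distribution over candidate clauses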

17:30
First Datasets and Experiments with Neural Conjecturing

ABSTRACT. We describe several datasets and first experiments with creating conjectures by neural methods. The datasets are based on the Mizar Mathematical Library processed in several forms and the problems extracted from it by the MPTP system and proved by the E prover using the ENIGMA guidance. The conjecturing experiments use the Transformer architecture and in particular its GPT-2 implementation.
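
As a loose illustration of the sampling step only (not the authors' training data, checkpoints, or prompting scheme), the Python sketch below draws continuations from the stock pretrained GPT-2 model via the Hugging Face transformers library; the prompt is a placeholder rather than an actual Mizar statement.

from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "theorem Th1:"  # placeholder prompt (assumption), not real Mizar text
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=True,                      # sample instead of greedy decoding
    top_k=50,
    max_new_tokens=40,
    num_return_sequences=3,
    pad_token_id=tokenizer.eos_token_id,
)
for seq in outputs:
    print(tokenizer.decode(seq, skip_special_tokens=True))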