
13:00-14:30 Session 6: Logic and Systems + Geometric Reasoning (II)

Regular papers

A Language with Type-Dependent Equality

ABSTRACT. Most formal systems treat equality as an absolute property, in the sense that two objects either are equal or are not. Even the ternary equality operator, which takes a type as an additional argument, is usually absolute in the sense that the type does not affect the truth value of the equality.

We explore the design of formal systems in which ternary equality may be relative and depend on the type. The most promising application of such systems is a more natural treatment of quotient types: if two objects can be different at the base type but equal at the quotient type, we can use the same representation in both types without having to resort to equivalence classes. The main drawback of such a system is that the substitution of equals for equals becomes more complex, as it now depends on the type at which the objects are used.
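The idea of an equality whose truth value depends on the type argument can be illustrated with a small sketch. The following Python snippet is purely illustrative (the type names and the `eq` function are hypothetical, not from the paper): at the base type the two integers differ, but at a quotient type they are equal, while both keep the same integer representation.

```python
# Hypothetical sketch of a ternary, type-dependent equality eq(a, b, ty).
# "Int" and "Int/5Z" are illustrative type names: Int/5Z stands for the
# quotient of the integers by the relation a - b divisible by 5.

def eq(a, b, ty):
    """Type-dependent equality: the type decides which equivalence is used."""
    if ty == "Int":
        return a == b                # absolute equality at the base type
    if ty == "Int/5Z":
        return (a - b) % 5 == 0      # equality modulo 5 at the quotient type
    raise ValueError(f"unknown type {ty}")

print(eq(2, 7, "Int"))     # False: different at the base type
print(eq(2, 7, "Int/5Z"))  # True: equal at the quotient type
```

Note that 2 and 7 are used directly as representatives at the quotient type; no equivalence classes are constructed, which is the treatment of quotients the abstract describes.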

Generating Custom Set Theories with Non-Set Structured Objects

ABSTRACT. Set theory has long been viewed as a foundation of mathematics, is pervasive in mathematical culture, and is explicitly used in much written mathematics. Because arrangements of sets can represent a vast multitude of mathematical objects, in most set theories every object is a set. This causes confusion and adds difficulties to formalising mathematics in set theory. We wish to have set theory's features while also having many mathematical objects not be sets.

A generalized set theory (GST) is a theory that has pure sets and may also have non-sets, which can have internal structure, and impure sets, which mix sets and non-sets. This paper provides a GST-building framework. We show example GSTs that have sets and also:

  1. non-set ordered pairs,
  2. non-set natural numbers,
  3. a non-set exception object that cannot be inside another object, and
  4. modular combinations of these features.

We show how to axiomatize GSTs and how to build models for GSTs in other GSTs.

A New Export of the Mizar Mathematical Library
PRESENTER: Colin Rothgang

ABSTRACT. The Mizar Mathematical Library is a prime target of library exports, which make proof assistant libraries available to knowledge management or other deduction systems. The library has been exported multiple times in the past, including our own export from Mizar to OMDoc done in 2011. But such exporters tend to be very difficult to maintain.

We present a complete reimplementation of our previous export. It incorporates many lessons learnt and improvements made both on the Mizar and the OMDoc side.

Gauss-lintel, an algorithm suite for chord diagrams exploration

ABSTRACT. Gauss diagrams, or more generally chord diagrams, are a well-established tool in the study of the topology of knots and of planar curves. Not every Gauss diagram corresponds to a knot or a planar curve; if it does, it is called realizable. Many realizability criteria and decision algorithms have been proposed since the 1930s. Recent studies in 2018-2019 formulated especially simple conditions related to realizability that are expressible in terms of the parity of chord intersections. In this paper we present a system description of Gauss-lintel, an algorithm suite for chord diagram exploration, implemented in SWI-Prolog. Gauss-lintel utilizes the concept of a "lintel", a list representation of an odd-even matching of the set of integers [0,...,2n-1], for the efficient generation of Gauss diagrams satisfying various properties, including variants of the realizability conditions. We report on extensive experiments in the generation and enumeration of various classes of Gauss diagrams, as well as on the experimental (in)validation of published realizability conditions.
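To make the "odd-even matching" idea concrete: in such a matching of [0,...,2n-1], every chord pairs one even endpoint with one odd endpoint, so the matchings correspond to permutations of the n odd numbers and there are exactly n! of them. The sketch below (illustrative Python; the tool itself is written in SWI-Prolog, and its actual lintel encoding may differ) enumerates these matchings.

```python
from itertools import permutations

def odd_even_matchings(n):
    """Yield all matchings of {0,...,2n-1} that pair each even number
    with an odd number -- the matchings underlying lintels."""
    evens = list(range(0, 2 * n, 2))
    for odds in permutations(range(1, 2 * n, 2)):
        yield list(zip(evens, odds))

# For n chords there are n! odd-even matchings:
print(sum(1 for _ in odd_even_matchings(3)))  # 6
```

Generating candidates this way, rather than over all (2n-1)!! matchings of 2n points, is one reason the odd-even representation is convenient for exhaustive exploration.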

14:30-15:00 Coffee Break
15:00-16:00 Session 7: Invited Talk (II)

Invited speaker

Doing Number Theory in Weak Systems of Arithmetic
16:00-16:30 Coffee Break
16:30-17:00 Session 8: Formalization (I)

Regular papers

Formalization of RBD-based Cause Consequence Analysis in HOL

ABSTRACT. Cause consequence analysis is a safety assessment technique that is traditionally used to model the causes of subsystem failures in a critical system and their potential consequences, using Fault Tree and Event Tree (ET) dependability modeling techniques combined in a graphical Cause-Consequence Diagram (CCD). In this paper, we propose the novel idea of using Reliability Block Diagrams (RBD) for CCD analysis based on formal methods. Unlike Fault Trees, RBDs allow us to model the success relationships of subsystem components that keep the entire subsystem reliable. To this end, we formalize in higher-order logic, using HOL4, new mathematical formulations of CCD functions for the RBD modeling of generic n-subsystems. This formalization enables universal n-level CCD analysis, based on RBDs and ETs, by determining the probabilities of multi-state safety classes, i.e., complete/partial failure and success, that can occur in an entire complex system at the subsystem level.
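The basic RBD calculus behind such analyses is simple: a series structure succeeds only if every block succeeds, and a parallel (redundant) structure fails only if every block fails. The following is a minimal unverified sketch of these two rules, with an invented example system (the paper's HOL4 formalization is far more general):

```python
from math import prod

def series(reliabilities):
    """A series RBD works only if every block works: R = prod(R_i)."""
    return prod(reliabilities)

def parallel(reliabilities):
    """A parallel RBD fails only if every block fails: R = 1 - prod(1 - R_i)."""
    return 1 - prod(1 - r for r in reliabilities)

# Hypothetical subsystem: two redundant pumps (reliability 0.9 each)
# in series with one valve (reliability 0.95).
r = series([parallel([0.9, 0.9]), 0.95])
print(round(r, 4))  # 0.9405
```

Composing these two operators over a diagram of n subsystems gives the subsystem success probabilities that feed the event-tree side of the CCD analysis.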

17:30-18:30 Session 9: Doctoral Programme (II)

Doctoral programme

Deep Learning for Automated Theorem Proving

ABSTRACT. In some ways, machine learning is a natural fit for theorem proving. For one, we do not really need to worry about testing robustness, as we have a built-in, absolute way of measuring performance: whether the correct proofs are successfully constructed. While it may be useful during research, we do not really need to understand why our heuristics give the answers they do, as long as we can verify the final proof. There are no consequences for the predictions being inaccurate, other than the failure to construct a proof.

Develop a mathematical e-dictionary and test it

ABSTRACT. Research questions. The project examines how the use of an electronic dictionary affects the learning of terminology. We work with student mathematics teachers in the field of graph theory. We develop and test a domain-specific, bilingual e-dictionary. The project consists of two parts: creating the dictionary in a highly automated way and testing its efficiency.

The first part will answer the following questions: Which tools help to create an e-dictionary in a highly automated way? Do mathematical texts pose special requirements for term extraction? How can we use neural networks to extract definitions from mathematical texts?

The second part answers the following research questions: Which terminology do students use to describe graphs? Does their understanding of graph theory improve when they use a domain-specific e-dictionary? How do terminological and conceptual knowledge influence each other? Does the usage of terms derived from the general language differ from that of loan words? How can a domain-specific dictionary help when translating specialized texts?

Course of the research. The project began in summer 2018, and we plan to finish it in summer 2022.

For the first part, we created two comparable corpora with texts on graph theory in German and English. We have about 700,000 German tokens and one million English tokens, with 30,000 types each.

We extracted terms with two different methods, both pattern-based. One additionally took frequencies into account; the other relied solely on domain-specific patterns. Our working hypothesis was that, due to the highly structured language of mathematics, it is not necessary to include frequencies. Experts rated the results, and we found that including frequency gave slightly better results. The final lemma list includes 1453 terms.

In the next step, we used a neural network to extract definitions for the terms. We used a BERT model based on the Hugging Face library. For the English data we could rely on existing training data from another project. The German training data had to be created manually from the corpus. For the evaluation, we calculated precision, recall, and F-score for a random sample of 200 sentences from each category, because we did not have the capacity to annotate the whole data set manually. As another reference we used term lists on graph theory provided by Wikipedia, checking which terms appear both in our extracted definitions and in the Wikipedia lists. Of course, we have to keep in mind that our tool cannot find definitions for terms that are not mentioned in the corpus.
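The evaluation metrics mentioned here are the standard ones. As a reminder of how they are computed (the counts below are invented for illustration, not the project's actual figures):

```python
def precision_recall_f1(tp, fp, fn):
    """Precision, recall, and F1 from counts of true positives (tp),
    false positives (fp), and false negatives (fn)."""
    precision = tp / (tp + fp)   # fraction of extracted items that are correct
    recall = tp / (tp + fn)      # fraction of true items that were extracted
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f1

# Hypothetical sample: 150 correct extractions, 30 spurious, 20 missed.
p, r, f = precision_recall_f1(150, 30, 20)
print(f"P={p:.3f} R={r:.3f} F1={f:.3f}")
```

Sampling 200 sentences per category trades exhaustive annotation for an estimate of these scores, which is a common compromise when manual annotation capacity is limited.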

For the second part, we conducted an experiment with undergraduate mathematics students, the target group of the dictionary. So far, we have done a first round, in summer 2020, with students using Wikipedia as the control group. At that point in their studies, they already have a basic understanding of academic mathematics but no particular knowledge of graph theory yet.

We gave them five different tasks. In the first task, they were asked to describe given graphs. The second task asked them to recognize an isomorphism between two graphs. In the third task, we wanted to explore whether there are differences in the understanding of terminology when it is derived from the general language. The fourth task asked them to list algorithms for a certain graph-theoretic problem. In the last task, they were asked to translate sentences from English into German. 182 students participated in the experiment. We analyzed which words they used in the different tasks and drew conclusions about their mental concepts of the subject.

We plan another experiment with a comparable group and the same questions for summer 2021. In the first study, we asked the students to use Wikipedia as an aid; in the second study, they will use our dictionary. Thus, we can compare the results.

Currently, we are preparing the dictionary for the second study. To that end, the terms from the lemma list are combined with the extracted definitions. Further information for the dictionary entries, e.g., terms that are semantically related to each other, is extracted from the corpus with pattern-based methods.

Publication plans. Unfortunately, we cannot make our corpora publicly available due to copyright restrictions. However, the dictionary itself will be available online after registration.

Preliminary results of the project were and will be presented at conferences on lexicography and related topics.