IWIL-2015: Papers with Abstracts

Papers
Abstract. Logistics service supply chains (LSSCs) are composed of several nodes, with distinct behaviors, that ensure moving a product or service from a producer to a consumer. Given the use of LSSCs in many safety-critical applications, such as hospitals, it is very important to ensure their reliable operation. For this purpose, many LSSC structures are modelled using Reliability Block Diagrams (RBDs) and their reliability is assessed using paper-and-pencil proofs or computer simulations. Due to their inherent incompleteness, these analysis techniques cannot ensure accurate reliability analysis results. In order to overcome this limitation, in this paper we propose to use higher-order-logic (HOL) theorem proving to conduct the RBD-based reliability analysis of LSSCs. In particular, we present the higher-order-logic formalizations of LSSCs with different and same types of capacities. As an illustrative example, we also present the formal reliability analysis of a simple three-node corporation.
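As a minimal sketch of the standard RBD algebra that such analyses rest on (an illustration of the textbook formulas, not the paper's HOL formalization), series and parallel block reliabilities can be computed as follows, assuming independent node reliabilities:

    # Illustrative sketch of classic RBD reliability formulas (not the
    # paper's HOL formalization). Assumes independent node reliabilities.
    from functools import reduce

    def series_reliability(rs):
        # A series RBD works only if every block works: R = prod(R_i)
        return reduce(lambda acc, r: acc * r, rs, 1.0)

    def parallel_reliability(rs):
        # A parallel RBD fails only if every block fails: R = 1 - prod(1 - R_i)
        return 1.0 - reduce(lambda acc, r: acc * (1.0 - r), rs, 1.0)

    # Example: a hypothetical three-node chain, each node with reliability 0.9
    print(series_reliability([0.9, 0.9, 0.9]))    # 0.729
    print(parallel_reliability([0.9, 0.9, 0.9]))  # 0.999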
Abstract. Extending first-order logic with ML-style polymorphism makes it possible to define generic axioms dealing with several sorts. Until recently, most automated theorem provers relied on preprocessing encodings into mono-/many-sorted logic to reason within such theories. In this paper, we discuss the implementation of polymorphism in the first-order tableau-based automated theorem prover Zenon. This implementation requires slight modifications to some basic parts of the code, from the representation of expressions to the proof-search algorithm.
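As a hedged illustration of the representation change involved (a Python sketch, not Zenon's actual OCaml code; all names here are hypothetical), polymorphism means that applications of a symbol carry explicit type arguments alongside term arguments:

    # Minimal sketch: expressions whose symbols carry type arguments, as
    # ML-style polymorphism requires. Illustrative only, not Zenon's code.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class TVar:            # a type variable, e.g. 'a
        name: str

    @dataclass(frozen=True)
    class TCon:            # a type constructor application, e.g. list(int)
        name: str
        args: tuple = ()

    @dataclass(frozen=True)
    class App:             # application of a polymorphic symbol
        sym: str           # symbol name
        tyargs: tuple      # explicit type arguments to instantiate
        args: tuple        # term arguments

    # cons : 'a * list('a) -> list('a), instantiated here at type int
    t = App("cons", (TCon("int"),),
            (App("zero", (), ()), App("nil", (TCon("int"),), ())))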
Abstract. Although clausal propositional proofs are significantly smaller than resolution proofs, their size is still too large for several applications. In this paper we present several methods to compress clausal proofs. These methods are based on a two-phase approach. The first phase consists of a lightweight compression algorithm that can easily be added to satisfiability solvers that support the emission of clausal proofs. In the second phase, we propose to use a powerful off-the-shelf general-purpose compression tool, such as bzip2 or 7z. Sorting literals before compression facilitates a delta encoding, which, combined with variable-byte encoding, improves the quality of the compression. We show that clausal proofs can be compressed by an order of magnitude by applying both phases.
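The sort/delta/variable-byte pipeline can be sketched as follows (an illustration of the encoding idea, not the authors' exact tool chain; the mapping of DIMACS literals to non-negative integers is assumed):

    # Sketch of the delta + variable-byte encoding idea described above.
    def vbyte(n):
        # Variable-byte encode a non-negative integer: 7 payload bits per
        # byte, high bit set on all but the last byte.
        out = bytearray()
        while True:
            b = n & 0x7F
            n >>= 7
            if n:
                out.append(b | 0x80)
            else:
                out.append(b)
                return bytes(out)

    def encode_clause(lits):
        # Sort literals so neighbors are numerically close, then store the
        # first value followed by small deltas. Assumes literals have been
        # mapped to non-negative integers (e.g. 2*|v| plus a sign bit).
        lits = sorted(lits)
        deltas = [lits[0]] + [b - a for a, b in zip(lits, lits[1:])]
        return b"".join(vbyte(d) for d in deltas)

    print(encode_clause([12, 5, 9]).hex())  # '050403': each delta fits a byte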
Abstract. In this paper we describe our combined statistical/semantic parsing method based on the CYK chart-parsing algorithm augmented with limited internal typechecking and external ATP filtering. This method was previously evaluated on parsing ambiguous mathematical expressions over the informalized Flyspeck corpus of 20,000 theorems. We first discuss the motivation and drawbacks of the first version of the CYK-based component of the algorithm, and then we propose and implement a more sophisticated approach based on a better statistical model of mathematical data structures.
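For reference, the probabilistic CYK core that such a parser builds on can be sketched as below (illustrative only; the paper's parser additionally performs internal typechecking and external ATP filtering, which this sketch omits):

    # Minimal probabilistic CYK recognizer over a binarized grammar.
    from collections import defaultdict

    def cyk(tokens, lexicon, rules):
        # lexicon: token -> {nonterminal: prob}
        # rules:   (B, C) -> {A: prob} for binary rules A -> B C
        n = len(tokens)
        chart = defaultdict(dict)  # (i, j) -> {nonterminal: best prob}
        for i, tok in enumerate(tokens):
            chart[(i, i + 1)] = dict(lexicon.get(tok, {}))
        for width in range(2, n + 1):
            for i in range(n - width + 1):
                j = i + width
                for k in range(i + 1, j):
                    for B, pb in chart[(i, k)].items():
                        for C, pc in chart[(k, j)].items():
                            for A, pr in rules.get((B, C), {}).items():
                                p = pb * pc * pr
                                if p > chart[(i, j)].get(A, 0.0):
                                    chart[(i, j)][A] = p
        return chart[(0, n)]

    # Toy grammar: parse "x + x" as a Term
    lexicon = {"x": {"Var": 1.0}, "+": {"Plus": 1.0}}
    rules = {("Plus", "Var"): {"Sum": 1.0}, ("Var", "Sum"): {"Term": 0.9}}
    print(cyk(["x", "+", "x"], lexicon, rules))  # {'Term': 0.9}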
Abstract. We present a method to simplify expressions in the context of a formal, axiomatically defined theory. In this paper, equality axioms are typically used, but the method is more generally applicable. The key idea of the method is to represent large, even infinite, sets of expressions (we use the words "term" and "expression" as synonyms) by means of a special data structure that allows us to apply axioms to the sets as a whole, not to single individual expressions. We then propose a bottom-up algorithm to finitely compute theories with a finite number of equivalence classes of equal terms. In that case, expressions can be simplified (i.e., minimized) in linear time by "folding" them on the computed representation of the theory. We demonstrate the method for boolean expressions with a small number of variables. Finally, we propose a "goal-oriented" algorithm that computes only small parts of the underlying theory in order to simplify a given particular expression. We show that the algorithm is able to simplify boolean expressions with many more variables, but optimality can no longer be certified.
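A hedged illustration of the bottom-up idea (not the paper's data structure or algorithm): for boolean expressions over two variables, the truth table is a canonical identifier of an equivalence class, so one can enumerate expressions by size and keep the smallest representative of each class; "folding" an expression then amounts to looking up its class representative:

    # Illustrative sketch only: enumerate boolean expressions over (a, b)
    # by size, keyed by truth table, keeping the smallest representative.
    base = [("a", (False, False, True, True)),
            ("b", (False, True, False, True)),
            ("0", (False, False, False, False)),
            ("1", (True, True, True, True))]
    layer = {1: base}
    smallest = {}                      # truth table -> smallest expression
    for e, t in base:
        smallest.setdefault(t, e)
    for size in range(2, 6):
        new = []
        for e, t in layer[size - 1]:   # negations of smaller expressions
            new.append(("~" + e, tuple(not v for v in t)))
        for ls in range(1, size - 1):  # binary combinations
            for e1, t1 in layer[ls]:
                for e2, t2 in layer[size - 1 - ls]:
                    new.append(("(%s&%s)" % (e1, e2),
                                tuple(x and y for x, y in zip(t1, t2))))
                    new.append(("(%s|%s)" % (e1, e2),
                                tuple(x or y for x, y in zip(t1, t2))))
        layer[size] = new
        for e, t in new:
            smallest.setdefault(t, e)

    # "Folding" any expression equivalent to a|b yields its minimal form.
    print(smallest[(False, True, True, True)])  # -> (a|b)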
Abstract. A recursive function is well defined if every recursive call corresponds to a decrease in some well-founded order. Such a function is said to be _terminating_ and is in many applications the standard way to define a function. A boolean function can also be defined as an extreme solution to a recurrence relation, that is, as a least or greatest fixpoint of some functor. Such _extreme predicates_ are useful to encode a set of inductive or coinductive inference rules and are at the core of many a constructive logic. The verification-aware programming language Dafny supports both terminating functions and extreme predicates. This tutorial describes the difference in general terms, and then describes novel syntactic support in Dafny for defining and proving lemmas with extreme predicates. Various examples and considerations are given. Although Dafny's verifier has at its core a first-order SMT solver, Dafny's logical encoding makes it possible to reason about fixpoints in an automated way.
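As a language-neutral illustration of the least-fixpoint reading of inference rules (a Python sketch, not Dafny syntax; the `even_step` rules are hypothetical), the least predicate closed under a set of rules can be pictured as iterating a monotone functor from the empty set until it stabilizes:

    # Sketch: the least fixpoint of a monotone functor over finite sets.
    def least_fixpoint(step):
        current = set()
        while True:
            nxt = step(current)
            if nxt == current:
                return current
            current = nxt

    # Example rules for "even": 0 is even; if n is even, so is n + 2
    # (bounded at 20 so the iteration terminates in this finite sketch).
    def even_step(s):
        return s | {0} | {n + 2 for n in s if n + 2 <= 20}

    print(sorted(least_fixpoint(even_step)))  # [0, 2, 4, ..., 20]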
Abstract. Modern CDCL SAT solvers generally save the variable values when backtracking. We propose a new measure, called nbSAT, based on the saved assignment, to predict the usefulness of a learnt clause when reducing the clause database in Glucose 3.0. Roughly speaking, the nbSAT value of a learnt clause is equal to the number of literals satisfied by the current partial assignment plus the number of other literals that would be satisfied by the saved assignment. We study the nbSAT measure by empirically showing that it may be more accurate than the LBD measure originally used in Glucose. Based on this study, we implement an improvement in Glucose 3.0 to remove half of the learnt clauses with large nbSAT values instead of half of the clauses with large LBD values. This improvement, together with a resolution method that keeps the learnt clauses or resolvents produced using a learnt clause that subsume an original clause, makes Glucose 3.0 more effective for the application and crafted instances from the SAT 2014 competition.
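The measure as described admits a direct sketch (illustrative code, not Glucose's implementation; names and data layout are assumptions):

    # Sketch of the nbSAT measure: literals satisfied by the current
    # partial assignment, plus remaining literals that the saved
    # (phase-saving) assignment would satisfy.
    def nb_sat(clause, current, saved):
        # clause:  iterable of DIMACS-style literals (non-zero ints)
        # current: var -> bool, the current partial assignment
        # saved:   var -> bool, phases saved when backtracking
        def satisfied(lit, assignment):
            v = assignment.get(abs(lit))
            return v is not None and v == (lit > 0)
        now = sum(1 for lit in clause if satisfied(lit, current))
        rest = sum(1 for lit in clause
                   if abs(lit) not in current and satisfied(lit, saved))
        return now + rest

    # Example: clause (x1 or not x2 or x3); x1 currently true, saved x3=True
    print(nb_sat([1, -2, 3], {1: True}, {2: True, 3: True}))  # 1 + 1 = 2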
Abstract. The TPTP library is one of the leading problem libraries in the automated theorem proving community. Over the years, support was added for problems beyond those in first-order clausal form. Another addition was the augmentation of the language to support proofs output by theorem provers and the maintenance of a proof library, called TSTP. In this paper we propose another augmentation of the language to support the semantics of the inference rules used in these proofs.
Abstract. We present the proof search monad, a set of combinators that allows one to write a proof search engine in a style that resembles the formal rules closely. The user calls functions such as premise, prove or choice; the library then takes care of generating a derivation tree. Proof search engines written in this style enjoy: first, a one-to-one correspondence between the implementation and the derivation rules, which makes manual inspection easier; second, proof witnesses “for free”, which makes a verified, independent validation approach easier too.
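A rough, untyped analogue of the idea (a Python sketch; the paper's combinators such as premise, prove, and choice presumably live in a typed functional setting, and this is not the library's API): a search that, on success, returns a derivation tree mirroring exactly the rules that were applied, which is the proof witness "for free":

    # Sketch: backtracking proof search that builds a derivation tree.
    def prove(goal, rules):
        # rules(goal) -> list of (rule_name, [subgoals]); trying each
        # alternative in turn plays the role of "choice".
        for name, subgoals in rules(goal):
            premises = []
            for sub in subgoals:        # proving each premise recursively
                tree = prove(sub, rules)
                if tree is None:
                    break               # this alternative fails; try next
                premises.append(tree)
            else:
                return (name, goal, premises)  # derivation tree as witness
        return None                            # all alternatives failed

    # Toy system: even(0) is an axiom; even(n) follows from even(n - 2)
    def even_rules(n):
        out = []
        if n == 0:
            out.append(("axiom", []))
        if n >= 2:
            out.append(("step", [n - 2]))
        return out

    print(prove(4, even_rules))
    # ('step', 4, [('step', 2, [('axiom', 0, [])])])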
Abstract. The TPTP World is a well-established infrastructure that supports research, development, and deployment of Automated Theorem Proving (ATP) systems for classical logics. The TPTP World includes the TPTP problem library, the TSTP solution library, and standards for writing ATP problems and reporting ATP solutions, and it provides tools and services for processing ATP problems and solutions. This work describes a new component of the TPTP World: the Thousands of Models for Theorem Provers (TMTP) Model Library. This is a library of models for identified axiomatizations built from axiom sets in the TPTP problem library, along with functions for efficiently evaluating formulae with respect to models, and tools for examining and processing the models. The TMTP supports the development of semantically guided ATP systems, provides examples for developers of model-finding ATP systems, and provides insights into the semantics of axiomatizations.
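To make "evaluating formulae with respect to models" concrete, here is a minimal sketch of evaluation in a finite first-order model (the encoding and names are illustrative assumptions, not the TMTP's actual interface or format):

    # Sketch: evaluate a first-order formula in a finite model.
    def holds(formula, model, env=None):
        env = env or {}                        # variable -> domain element
        kind = formula[0]
        if kind == "pred":
            _, name, args = formula
            vals = tuple(env[a] for a in args)
            return vals in model["relations"][name]
        if kind == "not":
            return not holds(formula[1], model, env)
        if kind == "and":
            return holds(formula[1], model, env) and holds(formula[2], model, env)
        if kind == "forall":
            _, var, body = formula
            return all(holds(body, model, {**env, var: d})
                       for d in model["domain"])
        raise ValueError("unknown connective: %r" % (kind,))

    # Example: two-element model with reflexive p; check forall X. p(X, X)
    m = {"domain": [0, 1], "relations": {"p": {(0, 0), (1, 1)}}}
    print(holds(("forall", "X", ("pred", "p", ["X", "X"])), m))  # True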
Abstract. We describe our experiments with several state-of-the-art automated theorem provers on the problems in Tarskian geometry created by Beeson and Wos. In comparison to the manually guided Otter proofs by Beeson and Wos, we can solve a large number of problems fully automatically, thanks in particular to recent large-theory reasoning methods.