FLOC 2018: FEDERATED LOGIC CONFERENCE 2018
PROGRAM FOR SUNDAY, JULY 15TH

09:00-10:30 Session 101B (FM)
Location: Maths LT2
09:00
Deadlock Detection for Actor-based Coroutines

ABSTRACT. In this paper we introduce a new method for detecting deadlock in the coroutine mode of method execution of a single actor. The underlying actor-based language features asynchronous method calls and supports coroutines which allow for the cooperative scheduling of the tasks belonging to an actor. We model the local behavior of an actor as a well-structured transition system by means of predicate abstraction and derive the decidability of the occurrence of deadlocks caused by the coroutine mode of method execution.

09:30
An Algebraic Approach for Reasoning About Information Flow

ABSTRACT. This paper concerns the analysis of information leaks in security systems. We address the problem of specifying and analyzing large systems in the (standard) channel model used in quantitative information flow (QIF). We propose several operators which match typical interactions between system components. We explore their algebraic properties with respect to the security-preserving refinement relation defined by Alvim et al. and McIver et al. We show how the algebra can be used to simplify large system specifications in order to facilitate the computation of information leakage bounds. We demonstrate our results on the specification and analysis of the Crowds Protocol. Finally, we use the algebra to justify a new algorithm to compute leakage bounds for this protocol.
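
A minimal sketch of the standard QIF channel model that the paper builds on may help fix intuitions (this illustrates the general channel model of the QIF literature, not the paper's composition operators; all names in the sketch are ours):

    # Illustrative sketch of the standard QIF channel model; not the
    # paper's operators. A channel C is a stochastic matrix: C[x][y]
    # is the probability of observing output y given secret x.

    def prior_vulnerability(pi):
        # Adversary's best single guess before any observation.
        return max(pi)

    def posterior_vulnerability(pi, C):
        # Expected best guess after observing the channel output:
        # V[pi, C] = sum_y max_x pi[x] * C[x][y]
        return sum(max(pi[x] * C[x][y] for x in range(len(pi)))
                   for y in range(len(C[0])))

    def bayes_leakage(pi, C):
        # Multiplicative Bayes leakage: posterior over prior vulnerability.
        return posterior_vulnerability(pi, C) / prior_vulnerability(pi)

    # A 1-bit secret observed through a noisy channel.
    pi = [0.5, 0.5]
    C = [[0.75, 0.25],
         [0.25, 0.75]]
    print(bayes_leakage(pi, C))  # 1.5

Refinement and the paper's operators are then statements about how such matrices may be composed and compared without increasing leakage.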

10:00
Towards ‘Verifying’ a Water Treatment System
SPEAKER: Jingyi Wang

ABSTRACT. Modeling and verifying real-world cyber-physical systems are challenging, especially so for complex systems where manual modeling is infeasible. In this work, we report our experience on combining model learning and abstraction refinement to analyze a challenging system, i.e., a real-world Secure Water Treatment (SWaT) system. Given a set of safety requirements, the objective is either to show that the system is safe with a high probability (so that a system shutdown is rarely triggered due to a safety violation) or to show that it is not. As the system is too complicated to be manually modelled, we apply the latest automatic model learning techniques to construct a set of Markov chains through abstraction and refinement, based on two long system execution logs (one for training and the other for testing). For each probabilistic property, we either report that it does not hold with a certain level of probabilistic confidence, or report that it holds by showing the evidence in the form of an abstract Markov chain. The Markov chains can subsequently be implemented as runtime monitors in SWaT. To the best of our knowledge, this is the first case study of applying model learning techniques to a real-world system.

09:00-10:00 Session 101C: ICLP Invited Talk: Elvira Albert (ICLP)
09:00
Avoiding redundancies in the exploration of concurrent programs

ABSTRACT. Verification and testing of concurrent programs is challenged by the combinatorial explosion that arises by considering all possible ways in which processes/threads can interleave. Partial Order Reduction (POR) techniques eliminate redundancies in the exploration of concurrent programs based on the idea that two interleavings can be considered equivalent if one can be obtained from the other by swapping adjacent, independent execution steps. This talk will overview recent context-sensitive, constrained, dynamic POR techniques incorporated in the SYCO system, a SYstematic testing tool for COncurrent programs. We will also describe recent applications of SYCO to verify Software-Defined Networks.
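
As a concrete, hedged illustration of the equivalence underlying POR (ours, not SYCO's implementation): two interleavings are equivalent exactly when one can be transformed into the other by swapping adjacent independent steps, which a breadth-first search over such swaps can check for short traces.

    # Sketch: decide POR-equivalence of two interleavings by exploring
    # swaps of adjacent, independent execution steps (illustrative only).
    from collections import deque

    def equivalent(t1, t2, indep):
        # indep: set of frozensets {a, b} of independent actions.
        t1, t2 = tuple(t1), tuple(t2)
        seen, queue = {t1}, deque([t1])
        while queue:
            t = queue.popleft()
            if t == t2:
                return True
            for i in range(len(t) - 1):
                if frozenset((t[i], t[i + 1])) in indep:
                    s = t[:i] + (t[i + 1], t[i]) + t[i + 2:]
                    if s not in seen:
                        seen.add(s)
                        queue.append(s)
        return False

    # a and b touch different variables; c conflicts with both.
    indep = {frozenset(('a', 'b'))}
    print(equivalent('abc', 'bac', indep))  # True: a and b commute
    print(equivalent('acb', 'cab', indep))  # False: a and c conflict

A POR exploration then needs to visit only one representative per equivalence class instead of all interleavings.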

09:00-10:30 Session 101D: Logics and Calculi (IJCAR)
Location: Blavatnik LT1
09:00
A Resolution-Based Calculus for Preferential Logics

ABSTRACT. The vast majority of modal theorem provers implement modal tableau, or backwards proof search in (cut-free) sequent calculi. The design of suitable calculi is highly non-trivial, and employs nested sequents, labelled sequents and/or specifically designated transitional formulae. Theorem provers for first-order logic, on the other hand, are by and large based on resolution. In this paper, we present a resolution system for preference-based modal logics, specifically Burgess' system S. Our main technical results are soundness and completeness. Conceptually, we argue that resolution-based systems are not more difficult to design than cut-free sequent calculi but their purely syntactic nature makes them much better suited for implementation in automated reasoning systems.

09:30
Enumerating Justifications using Resolution

ABSTRACT. If a conclusion follows from a set of axioms, then its justification is a minimal subset of the axioms for which the entailment holds. An entailment can have several justifications. Such justifications are commonly used for debugging incorrect entailments in Description Logic ontologies. Recently, a number of SAT-based methods have been proposed that can enumerate all justifications for entailments in lightweight ontology languages such as EL. These methods work by encoding EL inferences as propositional Horn clauses, and finding minimal models that correspond to justifications using SAT solvers. In this paper, we propose a new procedure for the enumeration of justifications that uses resolution with answer literals instead of SAT solvers. In comparison to SAT-based methods, our procedure can enumerate justifications in any user-defined order that extends the set-inclusion relation. The procedure is easy to implement and, like resolution, can be parametrized with ordering and selection strategies. We have implemented this procedure in PULi, a new Java-based Proof Utility Library, and performed an empirical comparison of (several strategies of) our procedure and other SAT-based tools on popular EL ontologies. The experiments show that our procedure provides comparable, if not better, performance than those highly optimized tools. For example, using one of the strategies, we were able for the first time to compute all justifications for all concept subsumptions in one of the largest commonly used medical ontologies, SNOMED CT.
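
To make the Horn encoding concrete, here is a small, hedged Python sketch (a brute-force enumeration for illustration only; the paper's procedure uses resolution with answer literals instead, and the toy ontology is ours):

    # Justification enumeration over a Horn encoding, by brute force.
    # Axioms are selectable facts; inference rules are Horn clauses
    # (body, head) that are always available.
    from itertools import combinations

    def entails(facts, rules, goal):
        derived, changed = set(facts), True
        while changed:  # forward chaining (unit propagation)
            changed = False
            for body, head in rules:
                if head not in derived and set(body) <= derived:
                    derived.add(head)
                    changed = True
        return goal in derived

    def justifications(axioms, rules, goal):
        found = []
        for k in range(1, len(axioms) + 1):
            for sub in combinations(axioms, k):
                if any(set(j) <= set(sub) for j in found):
                    continue  # contains a smaller justification
                if entails(sub, rules, goal):
                    found.append(sub)
        return found

    # Toy EL-style ontology: A⊑B, B⊑C, A⊑C, with transitivity as the
    # only inference rule. The entailment A⊑C has two justifications.
    axioms = ['AsubB', 'BsubC', 'AsubC']
    rules = [(['AsubB', 'BsubC'], 'AsubC')]
    print(justifications(axioms, rules, 'AsubC'))
    # [('AsubC',), ('AsubB', 'BsubC')]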

10:00
A Tableaux Calculus for Reducing Proof Size

ABSTRACT. A tableau calculus is proposed, based on a compressed representation of clauses in which literals sharing a similar shape may be merged. The inferences applied to these literals are fused when possible, which reduces the size of the proof. It is shown that the obtained proof procedure is sound and refutationally complete, and that it can reduce the size of the tableau by an exponential factor. The approach is compatible with all usual refinements of tableaux.

09:00-10:30 Session 101E: SAT (IJCAR)
Location: Blavatnik LT2
09:00
QRAT+: Generalizing QRAT by a More Powerful QBF Redundancy Property

ABSTRACT. The QRAT (quantified resolution asymmetric tautology) proof system simulates virtually all inference rules applied in state-of-the-art quantified Boolean formula (QBF) reasoning tools. It consists of rules to rewrite a QBF by adding and deleting clauses and universal literals that have a certain redundancy property. To check for this redundancy property in QRAT, propositional unit propagation (UP) is applied to the quantifier-free, i.e., purely propositional, part of the QBF. We generalize the redundancy property in the QRAT system by QBF-specific UP (QUP). QUP extends UP by the universal reduction operation to eliminate universal literals from clauses. We apply QUP to an abstraction of the QBF where certain universal quantifiers are converted into existential ones. This way, we obtain a generalization of QRAT which we call QRAT+. The resulting redundancy property in QRAT+ based on QUP is more powerful than the one in QRAT based on UP. We report on proof-theoretical improvements and on experimental results to illustrate the benefits of using QRAT+ for QBF preprocessing.
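
For readers unfamiliar with universal reduction, a minimal worked example (ours, not from the paper): a universal literal may be dropped from a clause when no existential literal in the clause is quantified after it.

    \exists y\,\forall x:\ (x \vee y) \;\leadsto\; (y)
    \qquad\text{but}\qquad
    \forall x\,\exists y:\ (x \vee y) \;\not\leadsto\; (y)

In the first prefix the universal player moves after $y$ is fixed and can always falsify $x$, so $x$ is redundant; in the second, $y$ may depend on $x$, so removing $x$ would be unsound.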

09:30
Extended Resolution Simulates DRAT

ABSTRACT. We prove that extended resolution, a well-known proof system introduced by Tseitin, polynomially simulates DRAT, the standard proof system in modern SAT solving. Our simulation procedure takes as input a DRAT proof and transforms it into an extended-resolution proof whose size is only polynomial with respect to the original proof. Based on our simulation, we implemented a tool that transforms DRAT proofs into extended-resolution proofs. We ran our tool on several benchmark formulas to estimate the increase in size caused by our simulation in practice. Finally, as a side note, we show how blocked-clause addition, a generalization of the extension rule from extended resolution, can be used to replace the addition of resolution asymmetric tautologies in DRAT without introducing new variables.
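
As a reminder of the extension rule involved (a standard textbook example, not taken from the paper): extended resolution may introduce a fresh variable $x$ defined over existing literals, e.g. $x \leftrightarrow (a \wedge b)$, added in clausal form as

    (\neg x \vee a) \;\wedge\; (\neg x \vee b) \;\wedge\; (x \vee \neg a \vee \neg b)

after which $x$ can participate in resolution steps like any other variable; since $x$ is fresh, satisfiability is preserved.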

10:00
A SAT-Based Approach to Learn Explainable Decision Sets

ABSTRACT. The successes of machine learning in recent years have triggered a fast-growing range of applications. In important settings, including safety-critical applications, accurate predictions do not suffice; one expects the machine learning model to also explain the predictions made, in forms understandable by humans. Recent work proposed explainable models based on decision sets, which can be viewed as unordered sets of rules respecting some sort of rule non-overlap constraint. This paper investigates existing solutions for computing decision sets and identifies a number of drawbacks, related to rule overlap and the succinctness of explanations, the accuracy of achieved results, but also the efficiency of the proposed approaches. To address these drawbacks, the paper develops novel SAT-based solutions for learning decision sets. Experimental results on computing decision sets for representative datasets demonstrate that SAT enables solutions that are not only the most efficient, but also offer stronger guarantees in terms of rule non-overlap.

09:10-10:00 Session 102: CAV Award (CAV)
Location: Maths LT1

The CAV 2018 Award is given for SAT-based and Bounded Model Checking to:

  • Armin Biere
  • Alessandro Cimatti
  • Edmund M. Clarke
  • Daniel Kroening
  • Flavio Lerda
  • Yunshan Zhu

Bounded Model Checking has revolutionized the way model checking is used and perceived. It has increased the capabilities of model checkers by orders of magnitude, turning them into a standard tool for hardware verification and a very important component of the toolkit available for software verification. BMC changed the focus of model checking from full verification to bug-finding. The BMC problem is defined as follows: given a bound k, is there an erroneous computation of the system of length k? This problem is transformed into a Boolean formula that is satisfiable if and only if the system includes a computation of length k which violates the specification. Focusing on the bounded problem enabled the authors to exploit the progress that was made in SAT solving around the same time, and simultaneously to bootstrap the tremendous progress in satisfiability solving that we have seen since. While early implementations of BMC focused on hardware, CBMC has demonstrated how it can be applied to realistic programs written in C. The ability to apply verification directly to C programs gave an enormous boost to a very large spectrum of applications in the hardware and software industries and in academic research. The application areas include error explanation and localization, concurrent programs, equivalence checking, cyber-physical systems and control systems, test vector generation, worst-case execution time, security, and many other practical applications.
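
In symbols (a standard formulation of BMC, not quoted from the award citation): for a system with initial predicate $I$, transition relation $T$ and safety property $P$, the bound-$k$ query is

    \mathrm{BMC}_k \;=\; I(s_0) \;\wedge\; \bigwedge_{i=0}^{k-1} T(s_i, s_{i+1}) \;\wedge\; \bigvee_{i=0}^{k} \neg P(s_i)

which is satisfiable if and only if some execution of length at most $k$ violates $P$; a satisfying assignment is exactly the counterexample.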

10:00-10:30 Session 103A: Learning (CAV)
Location: Maths LT1
10:00
Learning Abstractions for Program Synthesis
SPEAKER: Xinyu Wang

ABSTRACT. Many example-guided program synthesis techniques use abstractions to prune the search space. While abstraction-based synthesis has proven to be very powerful, a domain expert needs to provide a suitable abstract domain, together with the abstract transformers of each DSL construct. However, coming up with useful abstractions can be non-trivial, as it requires both domain expertise and knowledge about the synthesizer. In this paper, we propose a new technique for learning abstractions that are useful for instantiating a general synthesis framework in a new domain. Given a DSL and a small set of training problems, our method uses tree interpolation to infer reusable predicate templates that speed up synthesis in a given domain. Our method also learns suitable abstract transformers by solving a certain kind of second-order constraint solving problem in a data-driven way. We have implemented the proposed method in a tool called Atlas and evaluate it in the context of the Blaze meta-synthesizer. Our evaluation shows that (a) Atlas can learn useful abstract domains and transformers from a few training problems, and (b) the abstractions learned by Atlas allow Blaze to achieve significantly better results compared to manually crafted abstractions.

10:15
The Learnability of Symbolic Automata

ABSTRACT. Symbolic automata allow transitions to carry predicates over rich alphabet theories, such as linear arithmetic, and therefore extend classic automata to operate over infinite alphabets, such as the set of rational numbers. In this paper, we study the problem of learning symbolic automata and exactly characterize under what conditions symbolic automata can be learned efficiently. First, we present an $L^*$-style algorithm for learning symbolic automata using membership and equivalence queries, which treats the predicates appearing on transitions as their own learnable entities. The main novelty of our algorithm is that it can take as input an algorithm $\Lambda$ for learning predicates in the underlying alphabet theory, and it uses $\Lambda$ to infer the predicates appearing on the transitions in the target automaton. Using this idea, our algorithm is able to learn automata operating over alphabet theories in which predicates are efficiently learnable using membership and equivalence queries. Furthermore, we prove that a necessary condition for efficient learnability of a symbolic finite automaton (SFA) is that predicates in the underlying algebra are also efficiently learnable using queries. Our results settle the learnability of a large class of SFA instances, leaving open only a subclass of SFAs over efficiently learnable alphabet theories. We implement our algorithm in an open-source library and show that it can efficiently learn automata that cannot be learned using existing algorithms, and that it significantly outperforms existing automata learning algorithms over large alphabets.

10:00-10:30 Session 103B: ASP Applications (ICLP)
10:00
Temporal Answer Set Programming on Finite Traces

ABSTRACT. In this paper, we introduce an alternative approach to Temporal Answer Set Programming that relies on a variation of Temporal Equilibrium Logic (TEL) for finite traces. This approach allows us to even out the expressiveness of TEL over infinite traces with the computational capacity of (incremental) Answer Set Programming (ASP). Also, we argue that finite traces are more natural when reasoning about action and change. As a result, our approach is readily implementable via multi-shot ASP systems and benefits from an extension of ASP's full-fledged input language with temporal operators. This includes future as well as past operators whose combination offers a rich temporal modeling language. For computation, we identify the class of temporal logic programs and prove that it constitutes a normal form for our approach. Finally, we outline two implementations, a generic one and an extension of clingo.

10:30-11:00 Coffee Break
11:00-12:30 Session 104A: Runtime Verification, Hybrid and Timed Systems (CAV)
Location: Maths LT1
11:00
Reachable Set Over-approximation for Nonlinear Systems Using Piecewise Barrier Tubes
SPEAKER: Hui Kong

ABSTRACT. We address the problem of analysing the reachable set of a nonlinear continuous system by computing or approximating the flow pipe of its dynamics. The most widely employed approach to tackle this problem is to perform a numerical integration over a given time horizon based on Taylor expansion and interval arithmetic. However, this method turns out to be very conservative when there is a large difference in speed between trajectories as time progresses. In this paper, we propose to use combinations of barrier functions, which we call piecewise barrier tubes (PBT), to overapproximate the flow pipe. The basic idea of PBT is that, for each segment of a flow pipe, a coarse box big enough to contain the segment is constructed using sampled simulation; inside the box we then compute by linear programming a set of barrier functions (called a barrier tube, or BT for short) which together form a tube surrounding the flow pipe. The benefit of using PBT is that (1) it is independent of time and hence avoids being stretched and deformed as time progresses, and (2) a small number of BTs can form a tight overapproximation of the flow pipe, which means that the computation required to decide whether the BTs intersect the unsafe set can be reduced significantly. We implemented a prototype tool called PBTS in C++. Experiments on benchmark systems show that our approach is effective.

11:15
Spacetime Interpolants

ABSTRACT. Reachability analysis is difficult for hybrid automata with affine differential equations, because the reach set needs to be approximated. Promising abstraction techniques usually employ interval methods or template polyhedra. Interval methods account for dense time and guarantee soundness, and there are interval-based tools that overapproximate affine flowpipes. But interval methods impose bounded and rigid shapes, which make refinement expensive and fixpoint detection difficult. Template polyhedra, on the other hand, can be adapted flexibly and can be unbounded, but sound template refinement for unbounded reachability analysis has been implemented only for systems with piecewise constant dynamics. We capitalize on the advantages of both techniques, combining interval arithmetic and template polyhedra, using the former to abstract time and the latter to abstract space. During a CEGAR loop, whenever a spurious error trajectory is found, we compute additional space constraints and split time intervals, and use these spacetime interpolants to eliminate the counterexample. Spacetime interpolation offers a lazy, flexible framework for increasing precision while guaranteeing soundness, both for error avoidance and fixpoint detection. To the best of our knowledge, this is the first abstraction refinement scheme for the reachability analysis over unbounded and dense time of affine hybrid systems, which is both sound and automatic. We demonstrate the effectiveness of our algorithm with several benchmark examples, which cannot be handled by other tools.

11:30
Monitoring Weak Consistency
SPEAKER: Michael Emmi

ABSTRACT. High-performance implementations of distributed and multicore shared objects often guarantee only the weak consistency of their concurrent operations, foregoing the de facto yet performance-restrictive consistency criterion of linearizability. While such weak consistency is often vital for achieving performance requirements, practical automation for checking weak consistency is lacking. In principle, algorithmically checking the consistency of executions according to various weak-consistency criteria is hard: in addition to the enumeration of linearizations of an execution’s operations, such criteria generally demand the enumeration of possible visibility relations among the linearized operations; a priori, both enumerations are exponential.

In this work we identify an optimization to weak-consistency checking: rather than enumerating every possible visibility relation, it suffices to consider only the minimal visibility relations which adhere to the various constraints of the given criterion, for a significant class of consistency criteria. We demonstrate the soundness of this optimization, and describe an associated minimal-visibility consistency checking algorithm. Empirically, we show that our algorithm significantly outperforms the baseline weak-consistency checking algorithm, which naïvely enumerates all visibilities, and adds only modest overhead to the baseline linearizability checking algorithm, which does not enumerate visibilities.

11:45
Monitoring CTMCs By Multi-Clock Timed Automata
SPEAKER: Yijun Feng

ABSTRACT. This paper presents a numerical algorithm to verify continuous-time Markov chains (CTMCs) against multi-clock deterministic timed automata (DTA). These DTA allow for specifying properties that cannot be expressed in CSL, the logic for CTMCs used by state-of-the-art probabilistic model checkers. The core problem is to compute the probability of timed runs of the CTMC ${\cal C}$ that are accepted by the DTA ${\cal A}$. These likelihoods equal reachability probabilities in an embedded piecewise deterministic Markov process (EPDP) obtained as the product of ${\cal C}$ and ${\cal A}$'s region automaton. This paper provides a numerical algorithm to efficiently solve the PDEs describing these reachability probabilities. The key insight is to solve an ordinary differential equation (ODE) that exploits the specific characteristics of the product EPDP. We establish the numerical precision of our algorithm and present experimental results with a prototypical implementation.

12:00
Start Pruning When Time Gets Urgent: Partial Order Reduction for Timed Systems

ABSTRACT. Partial order reduction for timed systems is a challenging topic due to the dependencies among events induced by time acting as a global synchronization mechanism. So far, there has only been a limited success in finding practically applicable solutions yielding significant state space reductions. We suggest a working and efficient method to facilitate stubborn set reduction for timed systems with urgent behaviour. We first describe the framework in the general setting of timed labelled transition systems and then instantiate it to the case of timed-arc Petri nets. The basic idea is that we can employ classical untimed partial order reduction techniques as long as urgent behaviour is enforced. Our solution is implemented in the model checker TAPAAL and the feature is now broadly available to the users of the tool in its latest release from January 2018. By a series of larger case studies, we document the benefits of our method and its applicability to real-world scenarios.

12:15
A Counting Semantics for Monitoring LTL Specifications over Finite Traces

ABSTRACT. We consider the problem of monitoring a Linear Time Logic (LTL) specification that is defined on infinite paths, over finite traces. For example, we may need to draw a verdict on whether the system satisfies or violates the property “p holds infinitely often.” The problem is that there is always a continuation of a finite trace that satisfies the property and a different continuation that violates it.

We propose a two-step approach to address this problem. First, we introduce a counting semantics that computes the number of steps to witness the satisfaction or violation of a formula for each position in the trace. Second, we use this information to make a prediction on inconclusive suffixes. In particular, we consider a good suffix to be one that is shorter than the longest witness for a satisfaction, and a bad suffix to be one that is shorter than or equal to the longest witness for a violation. Based on this assumption, we provide a verdict assessing whether a continuation of the execution on the same system will presumably satisfy or violate the property.

11:00-12:00 Session 104B: FM Invited Talk: Annabelle McIver (FM)
Location: Maths LT2
11:00
Privacy in Text Processing: An information flow perspective

ABSTRACT. The problem of text document obfuscation is to provide an automated mechanism which is able to make accessible the content of a text document without revealing the identity of its writer. This is more challenging than it seems, because an adversary equipped with powerful machine learning mechanisms is able to identify authorship (with good accuracy) where, for example, the name of the author has been redacted. Current obfuscation methods are ad hoc and have been shown to provide weak protection against such adversaries. Differential privacy, which is able to provide strong guarantees of privacy in some domains, has been thought not to be applicable to text processing.

In this paper we will review obfuscation as a quantitative information flow problem and explain how generalised differential privacy can be applied to this problem to provide strong anonymisation guarantees in a standard model for text processing.

11:00-12:30 Session 104C: Implementation I (ICLP)
11:00
Top-down and Bottom-up Evaluation Procedurally Integrated

ABSTRACT. This paper describes how XSB combines top-down and bottom-up computation through the mechanisms of variant tabling and subsumptive tabling with abstraction, respectively. It is well known that top-down evaluation of logical rules in Prolog has a procedural interpretation as recursive procedure invocation (Kowalski 1986). Tabling adds the intuition of short-circuiting redundant computations (Warren 1992). This paper shows how to introduce into tabled logic program evaluation a bottom-up component, whose procedural intuition is the initialization of a data structure, in which a relation is initially computed and filled, on first demand, and then used throughout the remainder of a larger computation for efficient lookup. This allows many Prolog programs to be expressed fully declaratively, programs which formerly required procedural features, such as assert, to be made efficient.
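
The two procedural intuitions can be sketched in Python (ours, purely illustrative; XSB itself evaluates tabled Prolog): top-down tabled evaluation behaves like memoized recursion, while the bottom-up component computes a relation once, on first demand, and then answers every later call by lookup.

    # Illustrative Python analogue (not XSB code) of the two modes, on
    # graph reachability. Real variant tabling also resolves cyclic
    # calls, which plain memoization does not.
    from functools import lru_cache

    edges = {('a', 'b'), ('b', 'c'), ('c', 'd')}

    @lru_cache(maxsize=None)           # "tabling" as memoized recursion
    def reach_td(x, y):
        return (x, y) in edges or any(
            reach_td(z, y) for (w, z) in edges if w == x)

    _closure = None                    # bottom-up: fill once, then look up
    def reach_bu(x, y):
        global _closure
        if _closure is None:           # first demand: compute the relation
            _closure = set(edges)
            while True:
                new = {(a, d) for (a, b) in _closure
                              for (c, d) in _closure if b == c}
                if new <= _closure:
                    break
                _closure |= new
        return (x, y) in _closure      # later calls are cheap lookups

    print(reach_td('a', 'd'), reach_bu('a', 'd'))  # True True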

11:30
Certified Graph View Maintenance with Regular Datalog

ABSTRACT. We employ the Coq proof assistant to develop a mechanically-certified framework for evaluating graph queries and incrementally maintaining materialized graph instances, also called views. The language we use for defining queries and views is Regular Datalog (RD) -- a notable fragment of non-recursive Datalog that can express complex navigational queries, with transitive closure as native operator. We first design and encode the theory of RD and then mechanize an RD-specific evaluation algorithm capable of fine-grained, incremental graph view computation, which we prove sound with respect to the declarative RD semantics. By using the Coq extraction mechanism, we test an OCaml version of the verified engine on a set of preliminary benchmarks.

Our development is particularly focused on leveraging existing verification and notational techniques to: a) define mechanized properties that can be easily understood by logicians and database researchers and b) attain formal verification with limited effort. Our work is the first step towards a unified, machine-verified, formal framework for dynamic graph query languages and their evaluation engines.

12:00
An iterative approach to precondition inference using constrained Horn clauses

ABSTRACT. We present a method for automatic inference of conditions on the initial states of a program that guarantee that the safety assertions in the program are not violated. Constrained Horn clauses (CHCs) are used to model the program and assertions in a uniform way, and we use standard abstract interpretations to derive an over-approximation of the set of unsafe initial states. The precondition is then the constraint satisfied by the complement of that set, which under-approximates the set of safe initial states. This approach has been used before but demonstrated only on small transition systems. In this paper, we develop an iterative refinement algorithm for non-linear CHCs, and show that it produces much more precise, and in some cases optimal, approximations of the safety conditions, and can scale to larger programs. The refinement algorithm uses partial evaluation and a form of counterexample-guided abstraction refinement to focus on the relevant program states. Disjunctive constraints, which are essential to achieve good precision, are generated in a controlled way. The algorithm is implemented and tested on a benchmark suite of programs from the literature on precondition inference and software verification competitions.
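
A one-line worked example of the idea (ours, not from the paper): for the program x := x + 2; assert(x <= 10), the unsafe initial states satisfy x + 2 > 10, i.e. x > 8, so the inferred precondition is the complement, x <= 8. In CHC form, the safety condition is the clause

    \mathit{false} \leftarrow \mathit{init}(x),\; x' = x + 2,\; x' > 10

and the precondition is the constraint on $\mathit{init}$ under which this clause has no solution.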

11:00-12:30 Session 104D: Formal Proofs (IJCAR)
Location: Blavatnik LT1
11:00
Constructive Decision via Redundancy-free Proof-Search

ABSTRACT. We give a constructive account of Kripke-Curry's method which was used to establish the decidability of Implicational Relevance Logic (R->). To sustain our approach, we mechanize this method in axiom-free Coq, abstracting away from the specific features of R-> to keep only the essential ingredients of the technique. In particular we show how to replace Kripke/Dickson's lemma by a constructive form of Ramsey's theorem based on the notion of almost full relation. We also explain how to replace König's lemma with an inductive form of Brouwer's Fan theorem. We instantiate our abstract proof to get a constructive decision procedure for R-> and discuss potential applications to other logical decidability problems.

11:30
Verifying Asymptotic Time Complexity of Imperative Programs in Isabelle
SPEAKER: Bohua Zhan

ABSTRACT. We present a framework in Isabelle for verifying asymptotic time complexity of imperative programs. We build upon an extension of Imperative HOL and its separation logic to include running time. In addition to the basic arguments, our framework is able to handle advanced techniques for time complexity analysis, such as the use of the Akra-Bazzi theorem and amortized analysis. Various automation is built and incorporated into the auto2 prover to reason about separation logic with time credits, and to derive asymptotic behavior of functions. As case studies, we verify the asymptotic time complexity (in addition to functional correctness) of imperative algorithms and data structures such as median of medians selection, Karatsuba's algorithm, and splay trees.

12:00
Formalizing Bachmair and Ganzinger's Ordered Resolution Prover

ABSTRACT. We present a formalization of the first half of Bachmair and Ganzinger's chapter on resolution theorem proving in Isabelle/HOL, culminating with a refutationally complete first-order prover based on ordered resolution with literal selection. We develop general infrastructure and methodology that can form the basis of completeness proofs for related calculi. Our work clarifies several of the fine points in the chapter's text, emphasizing the value of formal proofs in the field of automated reasoning.

11:00-12:30 Session 104E: SMT 2 (IJCAR)
Location: Blavatnik LT2
11:00
Exploring Approximations for Floating-Point Arithmetic using UppSAT

ABSTRACT. We consider the problem of solving floating-point constraints obtained from software verification. We present UppSAT, a new implementation of a systematic approximation refinement framework as an abstract SMT solver. Provided with an approximation and a decision procedure (implemented in an off-the-shelf SMT solver), UppSAT yields an approximating SMT solver. Additionally, UppSAT includes a library of predefined approximation components which can be combined and extended to define new encodings, orderings and solving strategies. We propose that UppSAT can be used as a sandbox for easy and flexible exploration of new approximations. To substantiate this, we explore several approximations of floating-point arithmetic. Approximations can be viewed as a composition of an encoding into a target theory, a precision ordering, and a number of strategies for model reconstruction and precision (or approximation) refinement. We present encodings of floating-point arithmetic into reduced-precision floating-point arithmetic, real arithmetic, and fixed-point arithmetic (encoded into the theory of bit-vectors in practice). In an experimental evaluation we compare the advantages and disadvantages of approximating solvers obtained by combining various encodings and decision procedures (based on existing, state-of-the-art SMT solvers for floating-point, real, and bit-vector arithmetic).

11:30
Datatypes with Shared Selectors

ABSTRACT. We introduce a new theory of algebraic datatypes where selector symbols can be shared between multiple constructors, thereby reducing the number of terms considered by current SMT-based solving approaches. We show that the satisfiability problem for the traditional theory of algebraic datatypes can be reduced, via a transformation provided in this paper, to problems where selectors are mapped to shared symbols. The use of shared selectors addresses a key bottleneck for an SMT-based enumerative approach to the Syntax-Guided Synthesis (SyGuS) problem. Our experimental evaluation of an implementation of the new theory in the solver CVC4, on syntax-guided synthesis and other domains, shows evidence that the use of shared selectors improves state-of-the-art SMT-based approaches for datatype constraints.
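
A schematic illustration of the idea (ours, simplified): for a datatype with constructors pair(fst: Int, snd: Int) and triple(a: Int, b: Int, c: Int), the traditional theory has five selector symbols, whereas shared selectors map same-position, same-type arguments to common symbols:

    traditional: fst, snd, a, b, c
    shared:      sel_1 (for fst and a), sel_2 (for snd and b), sel_3 (for c)

so a term such as sel_1(t) serves both constructors, and an enumerative procedure has fewer distinct terms to consider.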

12:00
A Reduction from Unbounded Linear Mixed Arithmetic Problems into Bounded Problems

ABSTRACT. We present a combination of the Mixed-Echelon-Hermite transformation and the Double-Bounded Reduction for systems of linear mixed arithmetic that preserves satisfiability and can be computed in polynomial time. Together, the two transformations turn any system of linear mixed constraints into a bounded system, i.e., a system for which termination can be achieved easily. Existing approaches for linear mixed arithmetic, e.g., branch-and-bound and cuts from proofs, explore only a finite search space after application of our two transformations. Instead of generating a priori bounds for the variables, e.g., as suggested by Papadimitriou, unbounded variables are eliminated through the two transformations. The transformations orient themselves on the structure of an input system instead of computing a priori (over-)approximations out of the available constants. Experiments provide further evidence of the efficiency of the transformations in practice. We also present a polynomial method for converting certificates of (un)satisfiability from the transformed to the original system.

12:00-12:30 Session 105 (FM)
Location: Maths LT2
12:00
FSM Inference from Long Traces

ABSTRACT. Inferring a minimal finite state machine (FSM) from a given set of traces is a fundamental problem in computer science. Although the problem is known to be NP-complete, it can be solved efficiently with SAT solvers when the given set of traces is relatively small. On the other hand, to infer an FSM equivalent to a machine which generates traces, the set of traces should be sufficiently representative and hence large. However, the existing SAT-based inference techniques do not scale well when the length and number of traces increase. In this paper, we propose a novel approach which processes long traces incrementally. The experimental results indicate that it scales sufficiently well and that the time it takes grows slowly with the size of the traces.

12:30-14:00 Lunch Break
14:00-15:30 Session 106A: Tools (CAV)
Location: Maths LT1
14:00
Rabinizer 4: From LTL to Your Favourite Deterministic Automaton

ABSTRACT. We present Rabinizer 4, a tool set for translating formulae of linear temporal logic to different types of deterministic omega-automata. The tool set implements and optimizes several recent constructions, including the first implementation translating the frequency extension of LTL. Further, we provide a distribution of PRISM that links Rabinizer and offers model checking procedures for probabilistic systems that are not in the official PRISM distribution. Finally, we evaluate the performance and, in cases where previous implementations exist, show enhancements both in terms of the size of the automata and the computational time, due to algorithmic as well as implementation improvements.

14:15
Strix: Explicit Reactive Synthesis Strikes Back!

ABSTRACT. Strix is a new tool for reactive LTL synthesis combining a direct translation of LTL formulas into deterministic parity automata (DPA) and an efficient, multi-threaded explicit state solver for parity games. In brief, Strix (1) decomposes the given formula into simpler formulas, (2) translates these on-the-fly into DPAs based on the queries of the parity game solver, (3) composes the DPAs into a parity game, and at the same time already solves the intermediate games using strategy iteration, and (4) finally translates the winning strategy, if it exists, into a Mealy automaton or an AIGER circuit with optional minimization using external tools. We experimentally demonstrate the applicability of our approach by a comparison with Party, BoSy and ltlsynt using the SYNTCOMP2017 benchmarks. In these experiments, our prototype can compete with BoSy and ltlsynt, with only Party performing slightly better. In particular, our prototype successfully synthesizes the full and unmodified LTL specification of the AMBA protocol for n = 2 masters.

14:30
BTOR2, BtorMC and Boolector 3.0
SPEAKER: Armin Biere

ABSTRACT. We describe a new word-level model checking format to capture models of hardware and potentially software in a bit-precise manner. This simple, line-based and easy to parse format can be seen as a sorted extension of the word-level format BTOR. It uses design principles from the bit-level format AIGER and follows the semantics of the SMT-LIB logics of bit-vectors with arrays. This intermediate format can be used in various verification flows and is perfectly suited to establish a word-level model checking competition. A new open source model checker and bit-vector solver support this format. We further provide real-world word-level benchmarks on which these tools are evaluated.

14:45
Nagini: A Static Verifier for Python
SPEAKER: Marco Eilers

ABSTRACT. We present Nagini, an automated, modular verifier for statically-typed, concurrent Python 3 programs, built on the Viper verification infrastructure. Combining established concepts with new ideas, Nagini can verify memory safety, functional properties, termination, deadlock freedom, and input/output behavior. Our experiments show that Nagini is able to verify non-trivial properties of real-world Python code.
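
A hedged sketch of what a Nagini-annotated function looks like (the module path and contract calls follow Nagini's documentation; treat them as assumptions rather than verified API here):

    # Sketch; contract functions as in Nagini's docs (assumed API).
    from nagini_contracts.contracts import Requires, Ensures, Result

    def clamp(x: int, lo: int, hi: int) -> int:
        Requires(lo <= hi)                           # precondition
        Ensures(lo <= Result() and Result() <= hi)   # postcondition
        if x < lo:
            return lo
        if x > hi:
            return hi
        return x

Nagini translates such annotated programs to the Viper infrastructure, which discharges the resulting proof obligations automatically.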

15:00
Peregrine: A Tool for the Analysis of Population Protocols
SPEAKER: Stefan Jaax

ABSTRACT. We introduce Peregrine, the first tool for the analysis and parameterized verification of population protocols. Population protocols are a model of computation very much studied by the distributed computing community, in which mobile anonymous agents interact stochastically to achieve a common task. Peregrine allows users to design protocols, to simulate them both manually and automatically, to gather statistics of properties such as convergence speed, and to verify correctness automatically. This paper describes the features of Peregrine and their implementation.

15:15
ADAC: Automated Design of Approximate Circuits
SPEAKER: Jiri Matyas

ABSTRACT. Approximate circuits with relaxed requirements on functional correctness play an important role in the development of resource-efficient computer systems. Designing approximate circuits is a very complex and time-demanding process trying to find optimal trade-offs between the approximation error and resource savings. In this paper, we present ADAC -- a novel framework for automated design of approximate arithmetic circuits. ADAC integrates in a unique way efficient simulation and formal methods for approximate equivalence checking into a search-based circuit optimisation. To make ADAC easily accessible, it is implemented as a module of the ABC tool: a state-of-the-art system for circuit synthesis and verification. Within several hours, ADAC is able to construct high-quality Pareto sets of complex circuits (including even 32-bit multipliers), providing useful trade-offs between the resource consumption and the error that is formally guaranteed. This demonstrates outstanding performance and scalability compared with other existing approaches.

14:00-15:30 Session 106C: Learning and Reasoning (ICLP)
14:00
Exploiting Answer Set Programming with External Sources for Meta-Interpretive Learning

ABSTRACT. Meta-Interpretive Learning (MIL) learns logic programs from examples by instantiating meta-rules. The recent Metagol system efficiently solves MIL problems by relying on the procedural bias imposed by Prolog. Its focus on positive examples, however, means that Metagol can detect the derivability of negative examples only in a later check, which can severely hurt performance. Viewing MIL problems as combinatorial search problems, they can alternatively be solved by employing Answer Set Programming (ASP). Using a sophisticated ASP solver, we may expect that violations of negative examples can be propagated directly, but such an effect has never been explicitly exploited for general MIL. In fact, a straightforward ASP encoding of MIL results in a huge search space due to the lack of procedural bias and the need for grounding. To address these challenging issues, we encode MIL in the HEX formalism, an extension of ASP that allows us to outsource the background knowledge, and we restrict the search space by modeling the procedural bias. This way, the import of constants from the background knowledge can, for a given type of meta-rules, be limited to the relevant ones. Moreover, by abstracting from term manipulations in the encoding and exploiting the HEX interface mechanism, the import of such constants can be prevented completely in order to avoid the grounding bottleneck. An experimental evaluation shows promising results.

14:30
Incremental and Iterative Learning of Answer Set Programs from Mutually Distinct Examples
SPEAKER: Arindam Mitra

ABSTRACT. Over the years, the Artificial Intelligence (AI) community has produced several datasets which have given machine learning algorithms the opportunity to learn various skills across various domains. However, a subclass of these machine learning algorithms that aims at learning logic programs, namely the Inductive Logic Programming algorithms, has often failed at the task due to the vastness of these datasets. This has impacted the usability of knowledge representation and reasoning techniques in the development of AI systems. In this research, we try to address this scalability issue for the algorithms that learn Answer Set Programs. We present a sound and complete algorithm which takes the input in a slightly different manner and performs an efficient and more user-controlled search for a solution. We show via experiments that our algorithm can learn from two popular datasets from the machine learning community, namely bAbI (a question answering dataset) and MNIST (a dataset for handwritten digit recognition), which to the best of our knowledge was not previously possible. The system is publicly available at https://goo.gl/KdWAcV.

15:00
A Trajectory Calculus for Qualitative Spatial Reasoning Using Answer Set Programming

ABSTRACT. Spatial information is often expressed using qualitative terms such as natural language expressions instead of coordinates; reasoning over such terms has several practical applications, such as bus route planning. Representing and reasoning on trajectories is a specific case of qualitative spatial reasoning that focuses on moving objects and their paths. In this work, we propose two versions of a trajectory calculus based on the allowed properties over trajectories, where trajectories are defined as a sequence of non-overlapping regions of a partitioned map. More specifically, if a given trajectory is allowed to start and finish in the same region, 6 base relations are defined (TC-6). If a given trajectory should have different start and finish regions but cycles are allowed within, 10 base relations are defined (TC-10). Both versions of the calculus are implemented as ASP programs; we propose several different encodings, including a generalised program capable of encoding any qualitative calculus in ASP. All proposed encodings are experimentally evaluated using a real-world dataset. Experimental results show that the best performing implementation can scale up to an input of 250 trajectories for TC-6 and 150 trajectories for TC-10 for the problem of discovering a consistent configuration, a significant improvement compared to previous ASP implementations for similar qualitative spatial and temporal calculi.

14:00-15:00 Session 106D: IJCAR Invited Talk: Martin Giese (IJCAR)
Location: Blavatnik LT1
14:00
Industrial Data Access – What are the Reasoning Problems? And is Reasoning the Problem?

ABSTRACT. Optique (http://optique-project.eu) was an EU FP7 project that ran from November 2012 to October 2016. The main objective was to test the idea of “Ontology Based Data Access” (OBDA) on real industrial applications. Concretely: to support the work of geologists and geophysicists in the oil & gas company Statoil, and the work of turbine engineers at Siemens AG. This line of work now continues in the nationally funded ‘Centre for Research-based Innovation’ SIRIUS (http://sirius-labs.no) at the University of Oslo, with participation from the Universities of Oxford and Trondheim, as well as a large number of participating companies.

The software produced by the project features elaborate user interfaces, and no ∀ or ∃ can be seen on the surface. Still, most of the functionality is controlled by an ontology, which is nothing more than a set of axioms in a particular description logic. As a consequence, a variety of reasoning tasks takes place under the hood, all the way from query optimisation, via entity alignment and up to the user interface control code. This talk presents a selection of these problems, both solved and as-yet unsolved.

Though logic and reasoning are close to the hearts of many of the researchers involved, the success of the project was also dependent on other factors: interdisciplinary communication, usability considerations, and many pragmatic compromises, to name some. And sometimes, these would again lead to ‘nice’ research. The talk also covers some of these extra-logical aspects of the project.

15:00-15:30 Session 107A (FM)
Location: Maths LT2
15:00
A Weakness Measure for GR(1) Formulae

ABSTRACT. In spite of the theoretical and algorithmic developments for system synthesis in recent years, little effort has been dedicated to quantifying the quality of the specifications used for synthesis. When dealing with unrealizable specifications, finding the weakest environment assumptions that would ensure realizability is typically a desirable property; in such a context the weakness of the assumptions is a major quality parameter. The question of whether one assumption is weaker than another is commonly interpreted using implication or, equivalently, language inclusion. However, this interpretation does not provide any further insight into the weakness of assumptions when implication does not hold. To our knowledge, the only measure that is capable of comparing two formulae in this case is entropy, but even it fails to provide a sufficiently refined notion of weakness in case of GR(1) formulae, a subset of linear temporal logic formulae which is of particular interest in controller synthesis. In this paper we propose a more refined measure of weakness based on the Hausdorff dimension, a concept that captures the notion of size of the omega-language satisfying a linear temporal logic formula. We identify the conditions under which this measure is guaranteed to distinguish between weaker and stronger GR(1) formulae. We demonstrate through instances of application the usefulness of the weakness measure in computing GR(1) assumption refinements.

15:00-15:30 Session 107B: Termination (IJCAR)
Location: Blavatnik LT1
15:00
Well-Founded Unions

ABSTRACT. Given two or more well-founded (terminating) binary relations, when can one be sure that their union is likewise well-founded? We suggest new conditions for an arbitrary number of relations, generalising known conditions for two relations. We also provide counterexamples to several potential weakenings. All proofs have been machine checked.

15:30-16:00 Coffee Break
16:00-17:00 Session 108A: Probabilistic Systems (CAV)
Location: Maths LT1
16:00
Value Iteration for Simple Stochastic Games: Stopping Criterion and Learning Algorithm

ABSTRACT. Simple stochastic games can be solved by value iteration (VI), which yields a sequence of under-approximations of the value of the game. This sequence is guaranteed to converge to the value only in the limit. Since no stopping criterion is known, this technique does not provide any guarantees on its results. We provide the first stopping criterion for VI on simple stochastic games. It is achieved by additionally computing a convergent sequence of over-approximations of the value, relying on an analysis of the game graph. Consequently, VI becomes an anytime algorithm returning the approximation of the value and the current error bound. As another consequence, we can provide a simulation-based asynchronous VI algorithm, which yields the same guarantees, but without necessarily exploring the whole game graph.
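
The under- and over-approximation idea can be sketched in Python for the simplest special case, a Markov chain with a reachability objective (ours, illustrative; the paper handles simple stochastic games and obtains the upper bounds from an analysis of the game graph):

    # Sketch: value iteration from below and above with a stopping
    # criterion (Markov-chain special case, not the full game setting).
    # P maps each state to a list of (probability, successor) pairs;
    # states in `goal` are treated as absorbing.

    def cannot_reach(P, goal):
        can, changed = set(goal), True
        while changed:  # backward reachability to the goal
            changed = False
            for s in P:
                if s not in can and any(t in can for _, t in P[s]):
                    can.add(s)
                    changed = True
        return set(P) - can

    def interval_vi(P, goal, eps=1e-6):
        lo = {s: 1.0 if s in goal else 0.0 for s in P}
        # Without zeroing states that cannot reach the goal, the upper
        # bound would stay at 1 forever.
        hi = {s: 0.0 if s in cannot_reach(P, goal) else 1.0 for s in P}
        while max(hi[s] - lo[s] for s in P) > eps:
            lo = {s: lo[s] if s in goal else
                  sum(p * lo[t] for p, t in P[s]) for s in P}
            hi = {s: hi[s] if s in goal else
                  sum(p * hi[t] for p, t in P[s]) for s in P}
        return lo, hi  # the true value is bracketed within eps

    P = {'s': [(0.5, 'g'), (0.5, 'd')],
         'g': [(1.0, 'g')], 'd': [(1.0, 'd')]}
    print(interval_vi(P, {'g'}))  # both bounds reach 0.5 for 's'

At every iteration the pair (lo, hi) gives a sound anytime error bound, which is exactly what plain value iteration lacks.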

16:15
Sound Value Iteration
SPEAKER: Tim Quatmann

ABSTRACT. Computing reachability probabilities is at the heart of probabilistic model checking. All model checkers compute these probabilities in an iterative fashion using value iteration. This technique approximates a fixed point from below by determining reachability probabilities for an increasing number of steps. To avoid results that are significantly off, variants have recently been proposed that converge from both below and above. These procedures require starting values for both sides. We present an alternative that does not require the a priori computation of starting vectors and that converges faster on many benchmarks. The crux of our technique is to give tight and safe bounds — whose computation is cheap — on the reachability probabilities. Lifting this technique to expected rewards is trivial for both Markov chains and MDPs. Experimental results on a large set of benchmarks show its scalability and efficiency.

16:30
Safety-Aware Apprenticeship Learning
SPEAKER: Wenchao Li

ABSTRACT. Apprenticeship learning (AL) is a class of “learning from demonstrations” techniques where the reward function of a Markov Decision Process (MDP) is unknown to the learning agent and the agent has to derive a good policy by observing an expert’s demonstrations. In this paper, we study the problem of how to make AL algorithms inherently safe while still meeting its learning objective. We consider a setting where the unknown reward function is assumed to be a linear combination of a set of state features, and the safety property is specified in Probabilistic Computation Tree Logic (PCTL). By embedding probabilistic model checking inside AL, we propose a novel counterexample-guided approach that can ensure both safety and performance of the learnt policy. We demonstrate the effectiveness of our approach on several challenging AL scenarios where safety is essential.

16:45
Deciding Probabilistic Bisimilarity Distance One for Labelled Markov Chains
SPEAKER: Qiyi Tang

ABSTRACT. Probabilistic bisimilarity, due to Larsen and Skou, is an equivalence relation that captures which states of a labelled Markov chain behave the same. Since this behavioural equivalence only identifies states that transition to states that behave exactly the same with exactly the same probability, this notion of equivalence is not robust, as first observed by Giacalone, Jou and Smolka. The probabilistic bisimilarity distances, as first defined by Desharnais, Gupta, Jagadeesan and Panangaden, provide a quantitative generalization of probabilistic bisimilarity. The distance of states captures the similarity of their behaviour. The smaller the distance, the more alike the states behave. In particular, states are probabilistic bisimilar if and only if their distance is zero. This quantitative notion is robust in that small changes in the transition probabilities result in small changes in the distances.

During the last decade, several algorithms have been proposed to approximate and compute the probabilistic bisimilarity distances. The main result of this paper is an algorithm that decides distance one in O(n^2 + m^2), where n is the number of states and m is the number of transitions of the labelled Markov chain. The algorithm is the key new ingredient of our algorithm to compute the distances. The state of the art algorithm, that combines algorithms due to Derisavi, Hermanns and Sanders and due to Bacci, Bacci, Larsen and Mardare, can compute distances for labelled Markov chains up to 150 states. For one such labelled Markov chain, that algorithm takes more than 49 hours. In contrast, our new algorithm only takes 13 milliseconds. Furthermore, our algorithm can compute distances for labelled Markov chains with more than 10,000 states in less than 50 minutes.

16:00-18:00 Session 108B (FM)
Location: Maths LT2
16:00
Producing Explanations for Rich Logics

ABSTRACT. One of the claimed advantages of model checking is its capability to provide a counter-example explaining why a property is violated by a given system. Nevertheless, branching logics such as Computation Tree Logic and its extensions have complex branching counter-examples, and standard model checkers such as NuSMV do not produce complete counter-examples and are limited to single executions. Many branching logics can be translated into the mu-calculus. To solve this problem of producing complete and complex counter-examples for branching logics, we propose a mu-calculus-based framework with rich explanations. It integrates a mu-calculus model checker that produces complete explanations, and several functionalities to translate them back to the original logic. In addition to the framework itself, we describe its implementation in Python and illustrate its applicability with Alternating Temporal Logic.

16:30
The Compound Interest in Relaxing Punctuality

ABSTRACT. Imprecision in timing can sometimes be beneficial. Metric interval temporal logic (MITL), by disabling the expression of punctuality constraints, was shown to translate to timed automata, yielding an elementary decision procedure. We show how this principle extends to other forms of dense-time specification using regular expressions. By providing a clean, automaton-based formal framework for non-punctual languages, we are able to recover and extend several results in timed systems. Metric interval regular expressions (MIRE) are introduced, providing regular expressions with non-singular duration constraints. We obtain that MIRE are expressively complete relative to a class of one-clock timed automata, which can be determinized using additional clocks. Metric interval dynamic logic (MIDL) is then defined using MIRE as temporal modalities. We show that MIDL generalizes known extensions of MITL, while translating to timed automata at comparable cost.

17:00
IPL: An Integration Property Language for Multi-Model Cyber-Physical Systems

ABSTRACT. Design and verification of modern systems requires diverse models, which often come from a variety of disciplines, and it is challenging to manage their heterogeneity -- especially in the case of cyber-physical systems. To check consistency between models, recent approaches map these models to flexible static abstractions, such as architectural views. This model integration approach, however, comes at a cost of reduced expressiveness because complex behaviors of the models are abstracted away. As a result, it may be impossible to automatically verify important behavioral properties across multiple models, leaving systems vulnerable to subtle bugs. This paper introduces the Integration Property Language (IPL) that improves integration expressiveness using modular verification of properties that depend on detailed behavioral semantics while retaining the ability for static system-wide reasoning. We prove that the verification algorithm is sound and analyze its termination conditions. Furthermore, we perform a case study on a mobile robot to demonstrate IPL is practically useful and evaluate its performance.

17:30
Timed Epistemic Knowledge Bases for Social Networks
SPEAKER: Raúl Pardo

ABSTRACT. We present an epistemic logic equipped with time-stamps in atoms and epistemic operators, which enables reasoning about the moments at which events happen and knowledge is acquired or deduced. Our logic includes both an epistemic operator K and a belief operator B, to capture the disclosure of inaccurate information. Our main motivation is to describe rich privacy policies in online social networks (OSNs). Most of today's privacy policy mechanisms in existing OSNs allow only static policies. In our logic, it is possible to express rich dynamic policies in terms of the knowledge available to the different users and the precise time of actions and deductions. Our framework can be instantiated for different OSNs, by specifying the effect of the actions in the evolution of the social network and in the knowledge disclosed to each user. We present an algorithm for deducing knowledge and propagating beliefs, which can also be instantiated with different variants of how the epistemic information is preserved through time. Policies are modelled as formulae in the logic, which are interpreted over timed traces. Finally, we show that model-checking is decidable.

16:00-18:00 Session 108C: Technical Communications I (ICLP)
16:00
Cumulative Scoring-based Induction of Default Theories
SPEAKER: Gopal Gupta

ABSTRACT. Significant research has been conducted in recent years to extend Inductive Logic Programming (ILP) methods to induce a more expressive class of logic programs, such as answer set programs. The methods proposed perform an exhaustive search for the correct hypothesis; they are thus sound but not scalable to real-life datasets, and their inability to deal with noisy data further restricts their applicability. In contrast, top-down ILP algorithms such as FOIL can easily guide the search using heuristics and tolerate noise. They also scale up very well, due to the greedy nature of their search for the best hypothesis. However, in some cases, heuristics fail to direct the search in the correct direction. In this paper, we introduce the FOLD 2.0 algorithm---an enhanced version of our recently developed FOLD algorithm, which automates the inductive learning of default theories. The enhancements presented here preserve the greedy nature of hypothesis search during clause specialization while avoiding getting stuck in local optima---a major pitfall of FOIL-like algorithms. The experiments we report suggest a significant improvement in terms of accuracy and expressiveness of the class of induced hypotheses. To the best of our knowledge, FOLD 2.0 is the first heuristic-based, scalable, and noise-resilient ILP system to induce answer set programs.
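
For readers unfamiliar with default theories, the textbook shape of what such systems induce is the classic birds example (our illustration, not drawn from the paper): a rule with a negated abnormality literal, plus rules defining the exceptions:

    % A default with an exception, as answer-set-programming rules:
    % birds fly unless they are abnormal; penguins are abnormal.
    \mathit{flies}(X) \leftarrow \mathit{bird}(X) \wedge \mathit{not}\ \mathit{ab}(X)
    \qquad
    \mathit{ab}(X) \leftarrow \mathit{penguin}(X)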

16:15
Epistemic Logic Programs with World View Constraints
SPEAKER: Patrick Kahl

ABSTRACT. An epistemic logic program is a set of rules written in the language of Epistemic Specifications, an extension of the language of answer set programming that provides for more powerful introspective reasoning through the use of modal operators K and M. We propose adding a new construct to Epistemic Specifications called a world view constraint that provides a universal device for expressing global constraints in the various versions of the language. We further propose the use of subjective literals (literals preceded by K or M) in rule heads as syntactic sugar for world view constraints. Additionally, we provide an algorithm for finding the world views of such programs.
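
As a hedged illustration of the proposed construct (our notation): a world view constraint such as the one below discards every world view in which p is not known, and with subjective literals allowed in heads it could be sugared as the rule "K p.":

    % World view constraint (hedged example): eliminate any world view
    % of the program in which K p fails to hold.
    \bot \leftarrow \mathbf{not}\ K\, p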

16:30
SMT-based Answer Set Solver CMODELS(DIFF) (System Description)
SPEAKER: Da Shen

ABSTRACT. Many answer set solvers use satisfiability (SAT) solvers for search, and SMT solvers extend SAT solvers. This paper presents the CMODELS(DIFF) system, which uses SMT solvers to find answer sets of a logic program. Its theoretical foundation is Niemelä's characterization of the answer sets of a logic program via so-called level rankings. A comparative experimental analysis demonstrates that CMODELS(DIFF) is a viable answer set solver.
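
The level-ranking idea, roughly (our hedged summary of Niemelä's characterization): every atom in a candidate model must be derivable by a rule whose positive body atoms have strictly smaller ranks, a condition that maps naturally onto difference-logic constraints of the form lr(b) - lr(a) <= -1:

    % Sketch of level rankings: a supported model X is an answer set
    % iff some ranking lr orders each atom strictly above the positive
    % body of one of its supporting rules.
    a \in X \;\Rightarrow\; \exists r.\ \mathit{head}(r) = a,\;
    X \models \mathit{body}(r),\;
    \forall b \in \mathit{body}^{+}(r).\ \mathit{lr}(a) \ge \mathit{lr}(b) + 1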

16:45
Introspecting Preferences In Answer Set Programming

ABSTRACT. This paper develops a logic programming language, ASP$^\text{EP}$, that extends the answer set programming language with a new epistemic operator $\succcurlyeq_x$ where $x\in\{\sharp,\supseteq\}$. The operator is used between two literals in rule bodies, and thus allows for the representation of introspection of preferences in the presence of multiple belief sets: $G\succcurlyeq_\sharp F$ expresses that $G$ is preferred to $F$ by the cardinality of the sets, and $G\succcurlyeq_\supseteq F$ expresses that $G$ is preferred to $F$ by set-theoretic inclusion. We define the semantics of ASP$^\text{EP}$, explore the relation to languages of strong introspection, give an algorithm for computing solutions of ASP$^\text{EP}$ programs, and study the applications of ASP$^\text{EP}$ by modeling the Monty Hall problem and the principle of majority.
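
In the abstract's notation, a rule of the following shape would be a natural (hypothetical, not quoted from the paper) use of the cardinality preference in a Monty Hall model: switch when winning by switching holds in at least as many belief sets as winning by staying:

    % Hypothetical ASP^EP rule (our example): prefer switching if it
    % wins in at least as many belief sets as staying.
    \mathit{switch} \leftarrow \mathit{win\_switch} \succcurlyeq_{\sharp} \mathit{win\_stay}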

17:00
Application of Logic-Based Methods to Machine Component Design
SPEAKER: Bram Aerts

ABSTRACT. This paper describes an application worked out in collaboration with a company that produces made-to-order machine components. The goal of the project is to develop a system that can support the company's engineers by automating parts of the component design process. We propose a knowledge extraction methodology based on the recent DMN (Decision Modelling Notation) standard and compare a rule-based and a constraint-based method for representing the resulting knowledge. We study the advantages and disadvantages of both approaches in the context of the company's real-life application.

17:15
Improving Candidate Quality of Probabilistic Logic Models
SPEAKER: Inês Dutra

ABSTRACT. Many real-world phenomena exhibit both relational structure and uncertainty. Probabilistic Inductive Logic Programming (PILP) uses Inductive Logic Programming (ILP) extended with probabilistic facts to produce meaningful and interpretable models for such phenomena. This merge of First Order Logic (FOL) theories and uncertainty makes PILP a well-suited tool for knowledge representation and extraction. However, this flexibility is coupled with a problem (inherited from ILP) of exponential search space growth, so often only a subset of all possible models is explored due to limited resources. Furthermore, the probabilistic evaluation of FOL theories, performed by the underlying probabilistic logic language and its solver, is also computationally demanding. This work introduces a prediction-based pruning strategy, which can reduce the search space based on the probabilistic evaluation of models, and a safe pruning criterion, which guarantees that the optimal model is not pruned away, as well as two alternative, more aggressive criteria that do not provide this guarantee. Experiments performed using three benchmarks from different areas show that prediction pruning is effective in (i) maintaining predictive accuracy for all criteria and experimental settings; (ii) reducing the execution time when using some of the more aggressive criteria, compared to using no pruning; and (iii) selecting better candidate models in limited resource settings, also when compared to using no pruning.
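
Schematically (our Python sketch, not the paper's algorithm), prediction-based pruning expands a candidate only when a cheap predicted upper bound on its refinements' scores can still beat the incumbent; with an admissible bound this corresponds to a safe criterion, while tighter, possibly inadmissible bounds give more aggressive variants:

    # Hedged sketch of prediction-based pruning over a model search
    # space; 'evaluate', 'predict_bound', and 'refine' are assumed,
    # problem-specific callables.
    def search(seeds, evaluate, predict_bound, refine):
        best, best_score = None, float("-inf")
        frontier = list(seeds)
        while frontier:
            model = frontier.pop()
            score = evaluate(model)          # costly probabilistic evaluation
            if score > best_score:
                best, best_score = model, score
            # Prune the whole branch when even the predicted best
            # refinement cannot beat the incumbent.
            if predict_bound(model) > best_score:
                frontier.extend(refine(model))
        return best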

17:30
Natural Language Generation From Ontologies: Application Paper
SPEAKER: Van Nguyen

ABSTRACT. The paper addresses the problem of automatic generation of natural language descriptions for ontology-described artifacts. The motivation for the work is the challenge of providing textual descriptions of automatically generated scientific workflows (e.g., paragraphs that scientists can include in their publications). The paper presents two systems which generate descriptions of sets of atoms derived from a collection of ontologies. The first system, called nlgPhylogeny, demonstrates the feasibility of the task in the Phylotastic project, which aims at providing evolutionary biologists with a platform for the automatic generation of phylogenetic trees given some suitable inputs. nlgPhylogeny exploits the suitability of the Grammatical Framework (GF) for the natural language generation (NLG) task; the paper shows how elements of the ontologies in Phylotastic, such as web services and the inputs and outputs of web services, can be encoded in GF for the NLG task. The second system, called nlgOntology^A, is a generalization of nlgPhylogeny. It eliminates the requirement that a GF grammar be defined in advance and proposes the use of annotated ontologies for NLG. Given a set of annotated ontologies, nlgOntology^A will generate a GF grammar suitable for the generation of natural language descriptions of a set of atoms derived from these ontologies. The paper presents the algorithms used in the development of nlgPhylogeny and nlgOntology^A and discusses potential uses of these systems.

17:45
MASP-Reduce: a proposal for distributed computation of stable models
SPEAKER: Federico Igne

ABSTRACT. Recent years have seen increasing interest in the development of efficient solvers for Answer Set Programming (ASP) and in the application of ASP to increasingly challenging problems. In particular, several recent efforts have explored the scalability of ASP solvers when facing the challenges caused by the need to ground the program before resolution. This paper offers an alternative solution to this challenge, focused on the use of distributed programming techniques to reason about ASP programs whose grounding would be prohibitive for mainstream ASP solvers. The work builds on a 2004 proposal by Konczak et al., which characterizes answer set solving as a form of non-standard graph coloring. The paper expands this characterization to include syntactic extensions used in modern ASP (e.g., choice rules, weight constraints). We present an implementation of the solver using a distributed programming framework specifically designed to manipulate very large graphs, as provided by Apache Spark, which in turn builds on the MapReduce programming framework. Finally, we provide a few preliminary results obtained from the first prototype implementation of this approach.
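
In the Konczak et al. characterization (our hedged paraphrase), answer sets correspond to admissible two-colorings of the program's rule dependency graph, each rule being colored as applied or blocked consistently with the rules it depends on:

    % Hedged paraphrase: an answer set X induces the coloring
    r \mapsto \oplus \ \text{ if } X \models \mathit{body}(r) \quad (r \text{ applied})
    \qquad
    r \mapsto \ominus \ \text{ otherwise} \quad (r \text{ blocked})
    % and, conversely, every admissible coloring yields an answer set.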

16:00-18:00 Session 108D: Logics, Frameworks and Proofs (IJCAR)
Location: Blavatnik LT1
16:00
From Syntactic Proofs to Combinatorial Proofs

ABSTRACT. In this paper we investigate Hughes' combinatorial proofs as notion of proof identity for classical logic. We show for various syntactic formalisms, including sequent calculus, analytic tableaux and resolution, how they can be translated into combinatorial proofs, and which notion of identity they enforce. This allows, in particular, to compare proofs that are given in different formalisms.

16:30
A Logical Framework with Commutative and Non-Commutative Subexponentials
SPEAKER: Vivek Nigam

ABSTRACT. Logical frameworks allow the specification of deductive systems using the same logical machinery. Linear logical frameworks have been successfully used for the specification of a number of computational systems, logics, and proof systems. Their success lies in the fact that formulas can be distinguished as linear, which behave intuitively as resources, and unbounded, which behave intuitionistically. Commutative subexponentials enhance the expressiveness of linear logic frameworks by allowing the distinction of multiple contexts. These contexts may behave as multisets of formulas or as sets of formulas. Motivated by applications in distributed systems and in type-logical grammar, we propose a linear logical framework containing both commutative and non-commutative subexponentials. Non-commutative subexponentials can be used to specify contexts which behave as lists, not multisets, of formulas. In addition, motivated by our applications in type-logical grammar, where the weakening rule is disallowed, we investigate the proof theory of formulas that can only contract, but not weaken. In fact, our contraction is non-local. We demonstrate that under some conditions such formulas may be treated as unbounded formulas, which behave intuitionistically.
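
Schematically (our hedged summary of the subexponential idea): the exponential is indexed as $!^s A$, and each index $s$ independently fixes which structural rules its context admits, determining how that context behaves:

    % Structural rules admitted per index s, and the resulting
    % behaviour of the s-indexed context:
    %   weakening + contraction + exchange  ->  set
    %   exchange only                       ->  multiset
    %   no exchange                         ->  list (non-commutative)
    !^{s} A \qquad (s \text{ drawn from a fixed signature of indices})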

17:00
Uniform Substitution for Differential Game Logic

ABSTRACT. This paper presents a uniform substitution calculus for differential game logic (dGL). Church's uniform substitutions substitute a term or formula for a function or predicate symbol everywhere. After generalizing them to differential game logic and allowing for the substitution of hybrid games for game symbols, uniform substitutions make it possible to only use axioms instead of axiom schemata, thereby substantially simplifying implementations. Instead of subtle schema variables and soundness-critical side conditions on the occurrence patterns of logical variables to restrict infinitely many axiom schema instances to sound ones, the resulting axiomatization adopts only a finite number of ordinary dGL formulas as axioms, which uniform substitutions instantiate soundly. This paper proves the soundness of uniform substitution for the monotone modal logic dGL. The resulting axiomatization admits a straightforward modular implementation of dGL in theorem provers.
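
As a concrete illustration (our example, not quoted from the paper): the axiom of choice for games is stated with game symbols $a, b$ and a predicate symbol $p$, and a uniform substitution instantiates it to a specific dGL formula:

    % dGL axiom with game symbols a, b and predicate symbol p:
    \langle a \cup b \rangle p \leftrightarrow \langle a \rangle p \vee \langle b \rangle p
    % One uniform substitution (our example):
    \sigma = \{\, a \mapsto (x := x + 1),\ b \mapsto (x := 0),\ p \mapsto x \ge 0 \,\}
    % yielding the sound instance:
    \langle (x := x + 1) \cup (x := 0) \rangle\, x \ge 0 \leftrightarrow
    \langle x := x + 1 \rangle\, x \ge 0 \vee \langle x := 0 \rangle\, x \ge 0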

17:30
Theories as Types

ABSTRACT. Theories are an essential structuring principle that enables modularity, encapsulation, and reuse in formal libraries and programs (where they are called classes). Similar effects can be achieved by dependent record types. While the former form a separate language layer, the latter are a normal part of the type theory. This overlap in functionality can render different systems non-interoperable and lead to duplication of work.

We present a type-theoretic calculus and implementation of a variant of record types that for a wide class of formal languages naturally corresponds to theories. Moreover, we can now elegantly obtain a contravariant functor that reflects the theory level into the object level: for each theory we obtain the type of its models and for every theory morphism a function between the corresponding types. In particular this allows shallow – and thus structure-preserving – encodings of mathematical knowledge and program specifications while allowing the use of object-level features on models, e.g. equality and quantification.
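
As a hedged sketch of the correspondence (our rendering): a Monoid theory maps to the dependent record type of its models, so that models become ordinary objects that can be quantified over and compared:

    % The type of models of a Monoid theory, as a dependent record
    % (Sigma) type -- schematic, laws elided after the first:
    \mathsf{Mod}(\mathrm{Monoid}) \;=\;
    \Sigma\, U : \mathsf{type}.\;
    \Sigma\, \circ : U \to U \to U.\;
    \Sigma\, e : U.\;
    (\forall x{:}U.\ e \circ x = x) \times \cdots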

16:00-18:00 Session 108E: Verification (IJCAR)
Location: Blavatnik LT2
16:00
Checking Array Bounds by Abstract Interpretation and Symbolic Expressions
SPEAKER: Fausto Spoto

ABSTRACT. Array access out of bounds is a typical programming error. Since the 1970s, static analysis has been used to identify where such errors can actually occur at runtime, through abstract interpretation into linear constraints. However, feasibility and scalability to modern object-oriented code have not yet been established. This article builds on previous work on linear constraints and shows that the analysis does not scale when the linear constraints are implemented with polyhedra, while the more abstract zones scale to the analysis of medium-size applications. Moreover, this article formalises the inclusion of symbolic expressions in the constraints and shows that this improves precision. Expressions are automatically selected on demand. The resulting analysis applies to code with dynamic memory allocation and arrays held in expressions. It is sound, also in the presence of arbitrary side-effects. It is fully defined in the abstract interpretation framework and does not use any code instrumentation. Its proof of correctness, its implementation inside the commercial Julia analyzer, and experiments on third-party code complete the work.
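
The precision/cost trade-off at play is the standard one between these two numerical abstract domains (our summary):

    % Constraint shapes of the two domains:
    \text{polyhedra:}\ \ \textstyle\sum_i c_i x_i \le c
    \qquad
    \text{zones:}\ \ x - y \le c \ \ \text{or}\ \ \pm x \le c
    % A bounds check "0 <= i < a.length" is a conjunction of zone
    % constraints once a.length is tracked as a variable.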

16:30
A FOOLish Encoding of the Next State Relations of Imperative Programs

ABSTRACT. Automated theorem provers are routinely used in program analysis and verification for checking program properties. These properties are translated from program fragments to formulas expressed in the logic supported by the theorem prover. Such translations can be complex and require deep knowledge of how theorem provers work in order for the prover to succeed on the translated formulas. Our previous work introduced FOOL, a modification of first-order logic that extends it with syntactic constructs resembling features of programming languages. One can express program properties directly in FOOL and leave the translation to plain first-order logic to the theorem prover. In this paper we present a FOOL encoding of the next state relations of imperative programs. Based on this encoding, we implement a translation of imperative programs annotated with pre- and post-conditions to partial correctness properties of these programs. We present experimental results which demonstrate that program properties translated using our method can be efficiently checked by the first-order theorem prover Vampire.
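
To convey the flavour (our hedged example, not taken from the paper): FOOL's first-class booleans and if-then-else terms let the next-state relation of a conditional assignment be written directly, without case-splitting into separate clauses:

    % Next-state relation of "if (x > 0) { y := x } else { y := -x }"
    % written with an if-then-else term over primed (next-state) variables:
    y' = \mathsf{if}\ x > 0\ \mathsf{then}\ x\ \mathsf{else}\ {-x}
    \;\wedge\; x' = x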

17:00
Proof-Producing Synthesis of CakeML with I/O and Local State from Monadic HOL Functions
SPEAKER: Son Ho

ABSTRACT. We introduce an automatic method for producing stateful ML programs together with proofs of correctness from monadic functions in HOL. Our mechanism supports references, exceptions, and I/O operations, and can generate functions manipulating local state, which can then be encapsulated for use in a pure context. We apply this approach to several non-trivial examples, including the type inferencer and register allocator of the otherwise pure CakeML compiler, which now benefits from better runtime performance. This development has been carried out in the HOL4 theorem prover.

17:30
A Why3 framework for reflection proofs and its application to GMP's algorithms

ABSTRACT. Earlier work showed that automatic verification of GMP's algorithms using Why3 exceeds the current capabilities of automatic solvers. To complete this verification, numerous cut indications had to be supplied by the user, slowing the project to a crawl. This paper shows how we have extended Why3 with a framework for proofs by reflection, with minimal impact on the trusted computing base. This framework makes it easy to write dedicated decision procedures that make full use of Why3's imperative features and are formally verified. We evaluate how much work could have been saved when verifying GMP's algorithms, had this framework been available. This approach opens the way to efficiently tackling the further verification of GMP's algorithms.
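
The reflection pattern itself is standard (our schematic statement, hedged): one writes a decision procedure decide inside the logic, proves it sound once, and thereafter discharges each concrete goal by computation:

    % Soundness lemma of a reflective decision procedure, schematically:
    \forall f.\ \mathit{decide}(f) = \mathsf{true} \;\Rightarrow\; \llbracket f \rrbracket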