FLOC 2018: FEDERATED LOGIC CONFERENCE 2018
PROGRAM FOR WEDNESDAY, JULY 18TH

09:00-10:00 Session 125A: ARQNL Invited Talk: Larry Moss (ARQNL)
09:00
Automated Reasoning In Natural Language: Current Directions

ABSTRACT. TBA

09:00-10:30 Session 125B (ASPOCP)
09:00
Invited Talk: Optimization Problems in Answer Set Programming

ABSTRACT. Answer set programming (ASP) is a declarative language for nonmonotonic reasoning based on stable model semantics, where stable models are classical models of the input program satisfying a stability condition: only necessary information is included in each of these models, under the assumptions the model itself provides for the unknown knowledge in the program, which is encoded by means of default negation. Reasoning in the presence of unknown knowledge is common for rational agents acting in the real world. It is also common that real-world agents cannot meet all their desiderata, and therefore ASP programs may come with soft literals for representing numerical preferences over jointly incompatible conditions. Each stable model is therefore associated with a cost given by the number of unsatisfied soft literals, and stable models of minimum cost are preferred. In this talk I will show algorithms and strategies for computing optimal stable models, discuss their properties, and explain how they can be extended to handle some qualitative preferences.
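
To make the optimization setting concrete, here is a brute-force Python sketch of our own (not one of the algorithms from the talk; the program and soft literals are invented toy data): it enumerates the stable models of a small ground normal program via the Gelfond-Lifschitz reduct and keeps those that minimize the number of unsatisfied soft literals.

    from itertools import chain, combinations

    # Ground normal rules: (head, positive_body, negative_body).
    rules = [("a", [], ["b"]),   # a :- not b.
             ("b", [], ["a"]),   # b :- not a.
             ("c", ["a"], [])]   # c :- a.
    atoms = {"a", "b", "c"}
    soft = ["c"]                 # soft literal: prefer models containing c

    def minimal_model(definite):
        """Least model of a definite (negation-free) program."""
        model, changed = set(), True
        while changed:
            changed = False
            for head, pos in definite:
                if set(pos) <= model and head not in model:
                    model.add(head)
                    changed = True
        return model

    def is_stable(candidate):
        # Gelfond-Lifschitz reduct: drop rules whose negative body meets the
        # candidate, then delete the negative literals of the remaining rules.
        reduct = [(h, p) for h, p, n in rules if not (set(n) & candidate)]
        return minimal_model(reduct) == candidate

    subsets = chain.from_iterable(combinations(sorted(atoms), r)
                                  for r in range(len(atoms) + 1))
    stable = [set(m) for m in subsets if is_stable(set(m))]
    cost = lambda m: sum(1 for lit in soft if lit not in m)
    best = min(cost(m) for m in stable)
    print([sorted(m) for m in stable if cost(m) == best])   # -> [['a', 'c']]

Optimizing solvers replace this exponential enumeration with branch-and-bound or core-guided search over the same cost function.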

10:00
anthem: Transforming gringo Programs into First-Order Theories (Preliminary Report)

ABSTRACT. In a recent paper by Harrison et al., the concept of program completion is extended to a large class of programs in the input language of the ASP grounder gringo. We would like to automate the process of generating and simplifying completion formulas for programs in that language, because examining the output produced by this kind of software may help programmers to see more clearly what their program does and to what degree its set of stable models conforms with their intentions. If a formal specification for the program is available, then it may be possible to use this software, in combination with automated reasoning tools, to verify that the program is correct. This note is a preliminary report on a project motivated by this idea.
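
As a toy illustration of the underlying notion (a minimal sketch of Clark completion for ground programs; anthem itself handles the much richer gringo input language):

    from collections import defaultdict

    # Ground toy program: head :- body.
    rules = [("p", ["q", "not r"]),
             ("p", ["s"]),
             ("q", [])]
    atoms = {"p", "q", "r", "s"}

    bodies = defaultdict(list)
    for head, body in rules:
        bodies[head].append(" & ".join(body) if body else "true")

    for atom in sorted(atoms):
        # Completion: an atom holds iff one of its rule bodies holds;
        # atoms with no defining rules are completed to false.
        rhs = " | ".join(f"({b})" for b in bodies[atom]) if atom in bodies else "false"
        print(f"{atom} <-> {rhs}")
    # p <-> (q & not r) | (s)    q <-> (true)    r <-> false    s <-> false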

09:00-10:30 Session 125C: Invited talk: Michael Tautschnig (AVOCS)

09:00
Formal Methods at Amazon Web Services

ABSTRACT. Security is a top priority at Amazon Web Services. As we have a shared responsibility model with customers, AWS manages the components from the operating system down to the physical security of the facilities, while AWS customers are responsible for building secure applications on top of this infrastructure. In this talk I will describe tools that support our customers in securing their applications, and tools that we use to secure the infrastructure. Under the hood, these tools use formal methods to provide guarantees to our customers.

09:00-10:30 Session 125D (ICLP-DC)
Location: Maths L6
09:00
Proof-relevant resolution for elaboration of programming languages

ABSTRACT. Proof-relevant resolution is a new variant of resolution in Horn-clause logic and its extensions. We propose proof-relevant resolution as a systematic approach to elaboration in programming languages that is close to formal specification and hence allows for analysis of the semantics of the language. We demonstrate the approach on two case studies: we describe a novel, proof-relevant approach to type inference and term synthesis in dependently typed languages, and we show how proof-relevant resolution allows for analysis of inductive and coinductive soundness of type class resolution. We conclude with a discussion of the overall contributions of our current work.
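
The "proof-relevant" flavor can be conveyed in a few lines of Python (a propositional toy of our own, not the paper's calculus): resolution answers not only whether a goal is derivable but also with which proof term, and it is the proof term that elaboration consumes.

    # Named propositional Horn clauses: name -> (head, body).
    clauses = {"r1": ("int", []),             # invented toy rules
               "r2": ("list_int", ["int"]),
               "r3": ("pair", ["int", "list_int"])}

    def prove(goal):
        """SLD-style search that also returns the proof term."""
        for name, (head, body) in clauses.items():
            if head == goal:
                subs = [prove(b) for b in body]
                if all(subs):
                    return f"{name}({', '.join(subs)})" if body else name
        return None

    print(prove("pair"))   # r3(r1, r2(r1)) -- a witness of *how* pair holds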

09:30
Explaining Actual Causation via Reasoning about Actions and Change

ABSTRACT. In causality, an actual cause is often defined as an event responsible for bringing about a given outcome in a scenario. In practice, however, identifying this event alone is not always sufficient to provide a satisfactory explanation of how the outcome came to be. In this paper, we motivate this claim using well-known examples and present a novel framework for reasoning more deeply about actual causation. The framework reasons over a scenario and domain knowledge to identify additional events that helped to "set the stage" for the outcome. By leveraging techniques from Reasoning about Actions and Change, the proposed approach supports reasoning over domains in which the evolution of the state of the world over time plays a critical role, and enables one to identify and explain the circumstances that led to an outcome of interest. We utilize the action language $\mathcal{AL}$ for defining the constructs of the framework. This language lends itself quite naturally to an automated translation to Answer Set Programming, by means of which reasoning tasks of considerable complexity can be specified and executed. We speculate that a similar approach can also lead to the development of algorithms for our framework.

10:00
Probabilistic Action Language pBC+

ABSTRACT. We present ongoing research on a probabilistic extension of the action language BC+. Just as BC+ is defined as a high-level notation for answer set programs describing transition systems, the proposed language, which we call pBC+, is defined as a high-level notation for LP^MLN programs, a probabilistic extension of answer set programs.

As preliminary results, we illustrate how probabilistic reasoning about transition systems, such as prediction, postdiction, and planning problems, as well as probabilistic diagnosis for dynamic domains, can be modeled in pBC+ and computed using an implementation of LP^MLN.

For future work, we plan to develop a compiler that automatically translates pBC+ descriptions into LP^MLN programs, as well as to support parameter learning in probabilistic action domains through LP^MLN weight learning. We will work on defining useful extensions of pBC+ to facilitate hypothetical/counterfactual reasoning. We will also find real-world applications, possibly in robotic domains, to empirically study the performance of this approach to probabilistic reasoning in action domains.
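
As a rough sketch of the weighted semantics this builds on (drastically simplified, with invented numbers; actual LP^MLN attaches weights to rules and defines the distribution over stable models more carefully):

    import math

    # Two stable models of some ground program (computed elsewhere) and
    # Markov-Logic-style weights on soft atoms.
    stable_models = [{"a", "c"}, {"b"}]
    soft_weights = {"c": 1.5, "b": 0.5}

    def weight(model):
        # exp of the total weight of the satisfied soft atoms
        return math.exp(sum(w for atom, w in soft_weights.items() if atom in model))

    z = sum(weight(m) for m in stable_models)        # normalization constant
    for m in stable_models:
        print(sorted(m), round(weight(m) / z, 3))    # probability of each model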

09:00-10:30 Session 125E: Logic and Practice of Programming (LPOP)
09:00
Opening and introduction
09:10
Invited Talk: Practical uses of Logic, Formal Methods, B and ProB

ABSTRACT. The B method is quite popular for developing provably correct software for safety critical railway systems, particularly for driverless trains. In recent years, the B method has also been used successfully for data validation (http://www.data-validation.fr). There, the B language has proven to be a compact way to express complex validation rules, and tools such as predicateB, Ovado or ProB can be used to provide high assurance validation engines, where a secondary toolchain validates the result of the primary toolchain.

This talk will give an overview of our experience in using logic-based formal methods in general, and B in particular, for industrial applications. We will also touch on subjects such as training, readability, and the implementation of ProB in Prolog. We will examine which features of B make it well suited for, e.g., the railway domain, but also point out some weaknesses and suggestions for future developments. We will also touch upon other formal methods such as Alloy or TLA+, as well as other constraint solving backends for B not based on Prolog (SAT via Kodkod/Alloy and SMT via Z3 and CVC4).

09:50
Invited Talk: On the Development of Industrial Applications with ASP
SPEAKER: Nicola Leone

ABSTRACT. Answer Set Programming (ASP) is a powerful rule-based language for knowledge representation and reasoning that has been developed in the field of logic programming and nonmonotonic reasoning. After many years of basic research, the ASP technology has become mature for the development of significant real-world applications. In particular, the well-known ASP system DLV has undergone industrial exploitation by a spin-off company called DLVSYSTEM srl, which has led to its successful usage in a number of industry-level applications. The success of DLV for application development is also due to its powerful development tools, which support researchers and software developers by simplifying the integration of ASP into real-world applications, which usually require combining logic-based modules with a complete system featuring user interfaces, services, etc. In this talk, we first recall the basics of the ASP language. Then, we overview our advanced development tools, and we report on the recent implementation of some challenging industry-level applications of our system.

09:00-10:30 Session 125F: Practical MSO Model Checking 1 (LaSh)
09:00
Monadic Second-Order Model Checking with Fly-Automata

ABSTRACT. A well-known algorithmic meta-theorem states that every graph property expressible by a monadic second-order (MSO, in short) sentence, possibly using edge set quantifications, can be checked in linear time for graphs of bounded tree-width; furthermore, every graph property expressible by an MSO sentence not using edge set quantifications can be checked in linear time for graphs of bounded clique-width given by an appropriate algebraic term. The standard proofs construct, from the sentences and the given bounds on tree-width or clique-width, automata intended to run on tree-decompositions or clique-width terms. However, these automata have so many states that the constructions are not usable in practice; by a result of Frick and Grohe, this is unavoidable. To overcome this difficulty in cases that are not "toy examples", we have introduced fly-automata, which do not store states and transitions but compute them whenever needed. They have been implemented and tested, in particular for checking colorability and so-called acyclic colorability. A subset of the implemented functionalities will be demonstrated through a web interface in the first part of the lecture.
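
The core idea of a fly-automaton fits in a few lines of Python (a toy of our own, unrelated to the actual implementation): states are ordinary values and transitions are functions evaluated on demand, so the state space is never materialized.

    # Toy fly-automaton on terms ("leaf",) and ("join", t1, t2): it tracks
    # the number of leaves modulo K. No transition table is ever built;
    # each transition is computed when the run reaches the node.
    K = 5

    def run(term):
        if term[0] == "leaf":
            return 1 % K                       # state computed, not looked up
        _, left, right = term
        return (run(left) + run(right)) % K    # transition evaluated on the fly

    t = ("join", ("join", ("leaf",), ("leaf",)), ("leaf",))
    print(run(t))   # 3: the term has three leaves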

10:05
Lazy Automata Techniques for WS1S
SPEAKER: Lukas Holik

ABSTRACT. We present a new decision procedure for the logic WS1S. It originates from the classical approach, which first builds an automaton accepting all models of a formula and then tests whether its language is empty. The main novelty is to test the emptiness on the fly, while constructing a symbolic, term-based representation of the automaton, and prune the constructed state space from parts irrelevant to the test. The pruning is done by a generalization of two techniques used in antichain-based language inclusion and universality checking of finite automata: subsumption and early termination. The richer structure of the WS1S decision problem allows us, however, to elaborate on these techniques in novel ways. Our experiments show that the proposed approach can in many cases significantly outperform the classical decision procedure (implemented in the MONA tool) as well as recently proposed alternatives.

09:00-10:30 Session 125G (MLP)
Location: Blavatnik LT1
09:00
Opening and demos

ABSTRACT. Informal opening of the event: a chance to start networking and ask for demos.

09:45
Bimodal Software Engineering

ABSTRACT. Source code is bimodal: it combines a formal algorithmic channel and a natural language channel of identifiers and comments. To date, most work has focused exclusively on a single channel.  This is a missed opportunity because the two channels interact:  the natural language often explains or summarizes the algorithmic channel, so information in one channel can be used to improve analyses of the other channel.  A canonical bimodal fact is an identifier named "secret" (NL channel) being printed to the console (AL channel).  To exploit such bimodal facts, one must overcome two challenges:  find cross-channel synchronisation points and handle noise in the form of ambiguity in the NL channel and imprecision in the AL channel.  Thus, bimodality is a natural fit for machine learning.  I will present RefiNym, a bimodal analysis that models code with *name-flows*, a dataflow graph augmented to track identifier names.  Conceptual types are logically different types that do not always coincide with program types.  Passwords and URLs are example conceptual types that can share the program type String.  RefiNym is an unsupervised method that mines a lattice of conceptual types from name-flows and reifies those conceptual types into distinct nominal types.  For the String type, we show that RefiNym minimises co-occurrence of disparate conceptual types in the same scope by 42%, thereby making it harder for a developer to inadvertently introduce an unintended flow.

09:00-10:30 Session 125H: Industrial Session (NSV)
09:00
Needs in formal methods for the validation of naval systems

ABSTRACT. Naval systems are becoming more and more complex as they are increasingly integrated, with a significant increase in software parts. The complexity also comes from the many modes in systems, ranging from complete automation to secondary modes requiring the intervention of human operators. Validation of naval systems must take into account all these aspects, which are no longer manageable without the assistance of computer tools. This presentation will give examples of naval systems as well as properties to verify, for which real failures have been observed and for which the use of formal methods is necessary. A particular focus will be put on performance properties and software validation. The purpose of this presentation is to stimulate discussions between the industrial needs of formal validation in naval systems and the current academic solutions that could address these issues.

09:45
Challenges on the Application of Formal Methods for Validation and Verification in Aerospace Systems Engineering

ABSTRACT. The complexity of modern aircraft development is constantly rising, pushed by the growth of the transportation market, continuous technology improvements, the emergence of new technologies, the development of more challenging regulations and ever-increasing competition. The high-cost investments involved in the development of an aircraft and its constituent systems lead airframers and system suppliers to take critical design decisions early, with a potentially high risk on the development and operational costs of the aircraft. On the other hand, the increasing complexity makes the costs and risks connected with the verification and testing of aircraft systems and their integration rise constantly.

These considerations are pushing the aerospace industry towards a growing deployment of model-based methods to anticipate design problems closer in time to when the design decisions are taken, to reduce the risk of turn-backs, and to migrate towards a virtual-testing-driven development process, to gradually achieve right-the-first-time test rig campaigns and safety-of-flight.

In this evolving landscape, promising model-based technologies beyond simulation are emerging to support this view, including property falsification, automatic test generation and coverage and formal verification technologies. Nonetheless, there still remain compelling challenges to overcome to render these technologies widely applicable across the aerospace industry. In this paper, we present such challenges and position them in context of the development process of aerospace systems and relevant aerospace engineering standards.

09:00-10:30 Session 125I: Parallel graph algorithms (PLR)

On parallel algorithms working with graphs.

09:00
Parallel Algorithms for Parameter Synthesis from Temporal Logic Specifications

ABSTRACT. We will present a set of parallel algorithms for parameter synthesis in parametrised dynamical systems from temporal logic specifications. The problem is, for a dynamical system represented as a finite-state transition system with a parametrised transition relation, to synthesise the parameter values satisfying the given specification. Technically, the method is based on model checking algorithms adapted to this kind of transition system. Additionally, we will address a specific subproblem of parameter synthesis targeting the identification of attractors in the system dynamics. To that end, a novel parallel algorithm for the detection of strongly connected components in a parameterised system will be presented. We will show that the underlying algorithms can gain advantages from efficient symbolic representations of sets of parameter valuations. To demonstrate the method, we will describe applications to models representing the dynamics of biological systems with complex non-linear behaviour.

09:45
MASP-Reduce: a proposal for distributed computation of stable models
SPEAKER: Federico Igne

ABSTRACT. There has been increasing interest in recent years in the development of efficient solvers for Answer Set Programming (ASP) and in the application of ASP to solve increasingly challenging problems. In particular, several recent efforts have explored the issue of scalability of ASP solvers when addressing the challenges caused by the need to ground the program before resolution. This paper offers an alternative solution to this challenge, focused on the use of distributed programming techniques to reason about ASP programs whose grounding would be prohibitive for mainstream ASP solvers. The work builds on a proposal by Konczak et al. from 2004, which characterized answer set solving as a form of non-standard graph coloring. The paper expands this characterization to include syntactic extensions used in modern ASP (e.g., choice rules, weight constraints). We present an implementation of the solver using a distributed programming framework specifically designed to manipulate very large graphs, as provided by Apache Spark, which in turn builds on the MapReduce programming framework. Finally, we provide a few preliminary results obtained from the first prototype implementation of this approach.

09:00-10:30 Session 125J (REFINE)
Location: Blavatnik LT2
09:00
Automated verification of linearizability
10:00
Correctness of Concurrent Objects under Weak Memory Models
SPEAKER: Robert Colvin

ABSTRACT. Reasoning about the correctness of concurrent objects on weak memory models supported by modern multicore architectures is complicated. Traditional notions such as linearizability do not directly apply as the effect of an operation can become visible after the operation call has returned.

In this paper we develop a theory for correctness of concurrent objects under weak memory models. Central to our definitions is the concept of observations which determine when effects of operations become visible, and hence determine the semantics of objects, under a given memory model. The resulting notion of correctness, called object refinement, is generic as it is parameterised by the memory model under consideration. Our theory enforces the minimal constraints on the placing of observations and on the semantics of objects that underlie object refinement. Object refinement is suitable as a reference for correctness when proving new proof methods for objects under weak memory models to be sound and complete.

09:00-10:30 Session 125K (SYNT)
Location: Maths LT3
09:00
Synthesizing safe AI-based controllers for cyber-physical systems

ABSTRACT. In this talk, we will look at the problem of reasoning about safety for cyber-physical systems using semi-automated methods to learn safety invariants. We will leverage this basic technique to reason about AI-based controllers, for example, neural-network-based controllers trained using reinforcement learning. We show a technique that can potentially scale to controllers using thousands of neurons while still being able to prove safety of the synthesized controller. Finally, we chart a roadmap for synthesizing correct-by-design controllers that use AI-based algorithms.

09:45
Meta-Interpretive Learning of Logic Programs

ABSTRACT. Meta-Interpretive Learning (MIL) is a recent Inductive Logic Programming technique aimed at supporting learning of recursive definitions. A powerful and novel aspect of MIL is that when learning a predicate definition it automatically introduces sub-definitions, allowing decomposition into a hierarchy of reusable parts.  MIL is based on an adapted version of a Prolog meta-interpreter. Normally such a meta-interpreter derives a proof by repeatedly fetching first-order Prolog clauses whose heads unify with a given goal. By contrast, a meta-interpretive learner additionally fetches higher-order meta-rules whose heads unify with the goal, and saves the resulting meta-substitutions to form a program. This talk will overview theoretical and implementational advances in this new area, including the ability to learn Turing-computable functions within a constrained subset of logic programs, the use of probabilistic representations within Bayesian meta-interpretive learning, and techniques for minimising the number of meta-rules employed. The talk will also summarise applications of MIL, including the learning of regular and context-free grammars, learning from visual representations with repeated patterns, learning string transformations for spreadsheet applications, learning and optimising recursive robot strategies, and learning tactics for proving correctness of programs.  The talk will conclude by pointing to the many challenges which remain to be addressed within this new area.

09:00-10:30 Session 125L (TLA)
Location: Maths Boardroom
09:00
TLA+ in Engineering Systems: Quinceañera

ABSTRACT. Relating experience and anecdotes from 15 years applying TLA+ in a commercial software engineering environment with a look toward the future.

10:00
Modeling Virtual Machines and Interrupts in TLA+ & PlusCal

ABSTRACT. We present TLA+ and PlusCal specifications of the Arm Generic Interrupt Controller, the KVM hypervisor and their interactions. The objective is to verify safety properties such as correct delivery of interrupts, correct context switching, and correct migration of virtual CPUs, as well as liveness properties such as the eventual delivery of interrupts.

09:00-10:30 Session 125M (ThEdu)
09:00
Students' Proof Assistant (SPA)

ABSTRACT. The Students' Proof Assistant (SPA) aims both to teach how to use a proof assistant like Isabelle and to teach how reliable proof assistants are built. Technically it is a miniature proof assistant inside the Isabelle proof assistant. In addition, we conjecture that a good way to teach structured proving is with a concrete prover where the connection between semantics, proof system, and prover is clear. In fact, the proofs in Lamport's TLAPS proof assistant have a very similar structure to those in the declarative prover SPA. To illustrate this we compare a proof of Pelletier's problem 43 in TLAPS, Isabelle/Isar and SPA.
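
For reference, Pelletier's problem 43 is (in one standard formulation) $(\forall x \forall y\, (Q(x,y) \leftrightarrow \forall z\, (F(z,x) \leftrightarrow F(z,y)))) \rightarrow \forall x \forall y\, (Q(x,y) \leftrightarrow Q(y,x))$: if $Q$ relates exactly the pairs with the same $F$-members, then $Q$ is symmetric.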

09:30
Natural Deduction Assistant (NaDeA)

ABSTRACT. We present the Natural Deduction Assistant (NaDeA) and discuss its advantages and disadvantages as a tool for teaching logic. In particular we compare our approach to natural deduction in the Isabelle proof assistant. NaDeA is available online: https://nadea.compute.dtu.dk/

10:00
Rating of Geometric Automated Theorem Provers

ABSTRACT. The field of geometric automated theorem provers has a long and rich history, from the early synthetic provers in the 1950s to today's provers.

Establishing a rating among them would be useful for the improvement of the current methods and implementations: making them wider in scope, more efficient, and able to produce readable and trustworthy proofs.

We need a common test bench: a common language to describe the geometric problems; a large set of problems; and a set of measures to achieve such a goal.

09:00-10:30 Session 125N: Model Checking I (VSTTE)

Location: Maths LT2
09:00
Contract-based Compositional Verification of Infinite-State Reactive Systems

ABSTRACT. Model-based software development is a leading methodology for the construction of safety- and mission-critical embedded systems. Formal models of such systems can be validated, via formal verification or testing, against system-level requirements and modified as needed before the actual system is built. In many cases, source code can even be produced automatically from the model once the system designer is satisfied with it. As embedded systems become increasingly large and sophisticated, the size and complexity of models grow correspondingly, making the verification of top-level requirements harder, especially in the case of infinite-state systems. We argue that, as with conventional software, contracts are an effective mechanism to establish boundaries between components in a system model, and can be used to aid the verification of system-level properties by using compositional reasoning techniques. Component-level contracts also enable formal analyses that provide more accurate feedback to identify sources of errors or the parts of a system that contribute to the satisfaction of a given requirement. This talk discusses our experience in designing an assume-guarantee-based contract language on top of the Lustre modeling language and leveraging it to extend the Kind 2 model checker with contract-based compositional reasoning techniques.
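
For context, one classic non-circular assume-guarantee rule for safety properties (stated here generically; not necessarily the exact rule implemented in Kind 2) is: from $\langle \mathit{true} \rangle\, C_1\, \langle A \rangle$ and $\langle A \rangle\, C_2\, \langle G \rangle$ conclude $\langle \mathit{true} \rangle\, C_1 \parallel C_2\, \langle G \rangle$, read as: if $C_1$ guarantees $A$ unconditionally, and $C_2$ guarantees $G$ whenever its environment satisfies $A$, then the composition guarantees $G$.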

10:00
A Tree-Based Approach to Data Flow Proofs

ABSTRACT. Data flow proofs, which are classically based on data flow graphs, bear a promising practical potential for automatic verification. We base a fundamental investigation of data flow proofs on the notion of a data flow tree, which allows us to capture data flow at a finer level of granularity than data flow graphs. In particular, we characterize the exact relation between the data flow in actual executions and the data flow represented by the data flow graph.

09:00-10:30 Session 125P: Invited Speaker; Distributed Systems I (VaVAS)

Location: Maths LT1
09:00
Formal Synthesis of Control Strategies for Dynamical Systems

ABSTRACT. In control theory, complex models of physical processes, such as systems of differential equations, are analyzed or controlled from simple specifications, such as stability and set invariance. In formal methods, rich specifications, such as formulae of temporal logics, are checked against simple models of software programs and digital circuits, such as finite transition systems. With the development and integration of cyber physical and safety critical systems, there is an increasing need for computational tools for verification and control of complex systems from rich, temporal logic specifications. In this talk, I will discuss a set of approaches to formal synthesis of control strategies for dynamical systems from temporal logic specifications. I will first show how automata games for finite systems can be extended to obtain conservative control strategies for low dimensional linear and multilinear dynamical systems. I will then present several methods to reduce conservativeness and improve the scalability of the control synthesis algorithms for more general classes of dynamics. I will illustrate the usefulness of these approaches with examples from robotics and traffic control.

10:00
Towards a Hierarchical-Control Architecture for Distributed Autonomous Systems (Extended Abstract - Paper)
SPEAKER: Saud Yonbawi

ABSTRACT. We present work-in-progress to develop a hierarchical-control architecture for distributed autonomous systems. The new architecture comprises (i) a decentralised system-level control loop that partitions the goals of the distributed autonomous system among its components under conservative assumptions, and (ii) component-level control loops responsible for achieving the component sub-goals. The operation of the two control loops is underpinned by formal models that are small enough to enable their verification at runtime. To illustrate the use of our approach, we briefly describe its application to a multi-robot search and rescue mission.

09:00-10:30 Session 125Q: Invited talk (WST)
09:00
Termination Checking and Invariant Synthesis for Affine Programs

ABSTRACT. In this talk we consider foundational questions in proving termination and synthesising invariants for affine programs---programs whose assignments are given by affine expressions. We focus in particular on simple linear loops and consider both algebraic and semi-algebraic invariants. We emphasize the close relationship between deciding termination and constructing invariants and highlight mathematical tools in Diophantine approximation and algebraic geometry that underlie recent progress on these problems.

10:00-10:30 Session 126: ARQNL Regular Papers 1 (ARQNL)
10:00
Evidence Extraction from Parameterised Boolean Equation Systems

ABSTRACT. Model checking is a technique for automatically assessing the quality of software and hardware systems and designs. Given a formalisation of both the system behaviour and the requirements the system should meet, a model checker returns either a yes or a no. In case the answer is not as expected, it is desirable to provide feedback to the user as to why this is the case. Providing such feedback, however, is not straightforward if the requirement is expressed in a highly expressive logic such as the modal $\mu$-calculus, and when the decision problem is solved using intermediate formalisms. In this paper, we show how to extract witnesses and counterexamples from parameterised Boolean equation systems encoding the model checking problem for the first-order modal $\mu$-calculus. We have implemented our technique in the modelling and analysis toolset mCRL2 and showcase our approach on a few illustrative examples.

10:30-11:00 Coffee Break
11:00-12:30 Session 127A: ARQNL Regular Papers 2 (ARQNL)
11:00
Labelled Calculi for QMLs with Non-Rigid and Non-Denoting Terms

ABSTRACT. We introduce labelled sequent calculi for quantified modal logics with non-rigid and non-denoting terms. We prove that these calculi have the good structural properties of G3-style calculi. In particular, all rules are height-preserving invertible, weakening and contraction are height-preserving admissible and cut is admissible. Finally, we show that each calculus gives a proof-theoretic characterization of validity in the corresponding class of models.

11:30
An Outline for Simple Semi-automated Proof Assistants for First-order Modal Logics

ABSTRACT. Most theorem provers and proof assistants are written in imperative or functional programming languages. Recently, the claim that higher-order logic programming languages might be better suited for this task was revisited and a new interpreter, as well as new proof assistants based on it, were introduced. In this paper we follow these claims and describe a concise implementation of a prototype for a semi-automated proof assistant for first-order modal logic. The aim of this paper is to encourage the development of personal proof assistants and semi-automated provers for a variety of modal calculi.

12:00
Labelled Connection-based Proof Search for Multiplicative Intuitionistic Linear Logic

ABSTRACT. We propose a connection-based characterization for multiplicative intuitionistic linear logic (MILL) which is based on labels and constraints that capture Urquhart's possible worlds semantics of the logic. We briefly recall the purely syntactic sequent calculus for MILL, which we call LMILL, and then propose a sound and complete labelled sequent calculus GMILL for MILL. We then show how to translate every LMILL proof into a GMILL proof. From this translation, refining our results on BI (the logic of Bunched Implications), we show the soundness of the connection-based characterization and its completeness without the need for any notion of multiplicity.

11:00-12:30 Session 127B (ASPOCP)
11:00
A Survey of Advances in Epistemic Logic Program Solvers

ABSTRACT. Recent research in extensions of Answer Set Programming has included a renewed interest in the language of Epistemic Specifications, which adds modal operators K ("known") and M ("may be true") to provide for more powerful introspective reasoning and enhanced capability, particularly when reasoning with incomplete information. An epistemic logic program is a set of rules in this language. Infused with the research has been the desire for an efficient solver to enable the practical use of such programs for problem solving. In this paper, we report on the current state of development of epistemic logic program solvers.

11:30
selp: A Single-Shot Epistemic Logic Program Solver

ABSTRACT. Epistemic Logic Programs (ELPs) are an extension of Answer Set Programming (ASP) with epistemic operators that allow for a form of meta-reasoning, that is, reasoning over multiple possible worlds. Existing ELP solving approaches generally rely on making multiple calls to an ASP solver in order to evaluate the ELP. However, in this paper, we show that there also exists a direct translation from ELPs into non-ground ASP with bounded arity. The resulting ASP program can thus be solved in a single shot. We then implement this encoding method, using recently proposed techniques to handle large, non-ground ASP rules, into a prototype ELP solving system called selp. This solver exhibits competitive performance on a set of ELP benchmark instances.

12:00
Partial Evaluation of Logic Programs in Vector Spaces
SPEAKER: Chiaki Sakama

ABSTRACT. In this paper, we introduce methods for encoding propositional logic programs in vector spaces. Interpretations are represented by vectors and programs are represented by matrices. The least model of a definite program is computed by multiplying interpretation vectors and program matrices. To optimize computation in vector spaces, we provide a method for partial evaluation of programs using linear algebra. Partial evaluation is done by unfolding rules in a program, and it is realized in a vector space by multiplying program matrices. We perform experiments using randomly generated programs and show that partial evaluation has the potential to enable efficient computation of very large programs.
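
In the spirit of the encoding described (a minimal NumPy sketch assuming at most one rule per head; the paper's encoding is more general, and realizes partial evaluation by multiplying program matrices before iterating):

    import numpy as np

    # Atoms 0=p, 1=q, 2=r and the definite program:  p :- q, r.   q :- r.   r.
    # A rule with m body atoms contributes 1/m entries to its head's row, so
    # the head fires exactly when the row product with the 0/1 interpretation
    # vector reaches 1 (all body atoms true).
    M = np.array([[0, 0.5, 0.5],    # p :- q, r.
                  [0, 0,   1.0],    # q :- r.
                  [0, 0,   0  ]])
    facts = np.array([0.0, 0.0, 1.0])   # r.

    v = facts.copy()
    while True:
        new = np.maximum(facts, (M @ v >= 1 - 1e-9).astype(float))  # one T_P step
        if np.array_equal(new, v):
            break                        # least fixed point = least model
        v = new
    print(v)   # [1. 1. 1.] -> least model {p, q, r}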

11:00-12:30 Session 127C: AVoCS Regular papers 1 (AVOCS)

Automated verification

11:00
Detecting Deadlocks in Formal System Models with Condition Synchronization

ABSTRACT. We present a novel notion of deadlock for tasks which synchronize on arbitrary boolean conditions, and a sound deadlock analysis for it. Contrary to other approaches, our analysis aims to detect deadlocks caused by faulty system design rather than implementation bugs. A deadlock is a circular dependency between multiple tasks, where a task depends on a second task if the execution of the second task enables the first one to resume. This requires the analysis of the side-effects of the computations. The analysis is implemented as an extension of the DECO tool, is evaluated for the full coreABS language, and uses the KeY-ABS theorem prover to reason about side-effects of computations.

11:30
Backward reachability analysis for timed automata with data variables
SPEAKER: Rebeka Farkas

ABSTRACT. Efficient techniques for reachability analysis of timed automata are zone-based methods that explore the reachable state space from the initial state, and SMT-based methods that perform backward search from the target states. It is also possible to perform backward exploration based on zones, but calculating predecessor states for systems with data variables is computationally expensive, prohibiting the successful application of this approach so far. In this paper we overcome this problem by combining the zone-based backward exploration algorithm with the weakest precondition operation for data variables. This combination allows us to handle diagonal constraints efficiently as opposed to zone-based forward search where most approaches require additional operations to ensure correctness. We demonstrate the applicability and compare the efficiency of the algorithm to existing forward exploration approaches by measurements performed on industrial case studies. We show that data variables can be handled efficiently by the weakest precondition operation but the large number of target states often prevents successful verification.

12:00
An Entailment Checker for Separation Logic with Inductive Definitions

ABSTRACT. In this paper, we present Inductor, a checker for entailments between mutually recursive predicates, whose inductive definitions contain ground constraints belonging to the quantifier-free fragment of Separation Logic. Our tool implements a proof-search method for a cyclic proof system that we have shown to be sound and complete, under certain semantic restrictions involving the set of constraints in a given inductive system. Dedicated decision procedures from the DPLL(T)-based SMT solver CVC4 are used to establish the satisfiability of Separation Logic formulae. Given inductive predicate definitions, an entailment query, and a proof-search strategy, Inductor uses a compact tree structure to explore all derivations enabled by the strategy. A successful result is accompanied by a proof, while an unsuccessful one is supported by a counterexample.

11:00-12:30 Session 127D (ICLP-DC)
Location: Maths L6
11:00
The Learning-Knowledge-Reasoning Paradigm For Natural Language Understanding and Question Answering

ABSTRACT. Given a text, several questions can be asked. For some of these questions, the answer can be directly looked up from the text. However, for several other questions, one might need to use additional knowledge and sophisticated reasoning to find the answer. Developing AI agents that can answer these kinds of questions and can also justify their answers is the focus of this research. Towards this goal, we use the language of Answer Set Programming as the knowledge representation and reasoning language for the agent. The question that then arises is how to obtain the additional knowledge. In this work we show that, using existing Natural Language Processing parsers and a scalable Inductive Logic Programming algorithm, it is possible to learn this additional knowledge (containing mostly commonsense knowledge) from question-answering datasets, which can then be used for inference.

11:30
Natural Language Generation From Ontologies

ABSTRACT. The paper addresses the problem of automatic generation of natural language descriptions for ontology-described artifacts. The motivation for the work is the challenge of providing textual descriptions of automatically generated scientific workflows (e.g., paragraphs that scientists can include in their publications). The extended abstract presents a system which generates descriptions of sets of atoms derived from a collection of ontologies. The system, called \nlgP, demonstrates the feasibility of the task in the Phylotastic project, which aims at providing evolutionary biologists with a platform for the automatic generation of phylogenetic trees given suitable inputs. \nlgP utilizes the fact that the Grammatical Framework (GF) is suitable for the natural language generation (NLG) task; the abstract shows how elements of the ontologies in Phylotastic, such as web services and the inputs and outputs of web services, can be encoded in GF for the NLG task.

12:00
Knowledge Authoring and Question Answering via Controlled Natural Language

ABSTRACT. Knowledge authoring from text is the process of automatically acquiring, organizing and structuring knowledge from text so that it can be used for question answering or complex reasoning. However, current state-of-the-art systems are limited by the fact that they are not able to construct knowledge bases of high quality, while knowledge representation and reasoning (KRR) has strict requirements on the accuracy of data. Controlled Natural Languages (CNLs) emerged as a way to improve quality by restricting the flexibility of the language. However, they still fall short, as sentences that express the same information may be represented in different forms, and current CNL systems have limited power to standardize such sentences into the same logical form. We address this problem with the Knowledge Authoring Logic Machine (KALM), which performs semantic analysis of English sentences and achieves superior accuracy in standardizing sentences that express the same meaning to the same logical representation. In addition, we developed the query part of KALM to perform question answering, which also achieves very high accuracy in query understanding.

11:00-12:30 Session 127E: Security Policies as Challenge Problems (LPOP)
11:00
Introduction: Role-Based Access Control as a Programming Challenge
11:10
Security Policies in Constraint Handling Rules

ABSTRACT. Role-Based Access Control (RBAC) is a popular security policy framework. User access to resources is controlled by roles and privileges. Constraint Handling Rules (CHR) is a rule-based constraint logic language. CHR has been used to implement security policies for trust management systems like RBAC for more than two decades. In this extended abstract, we give some references to these works.
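
For readers new to the challenge itself, the core of RBAC fits in a few lines (a toy Python model for orientation only, independent of the CHR encoding):

    # Core RBAC in miniature: users are assigned roles, roles carry
    # permissions, and an access check walks the two assignments.
    user_roles = {"alice": {"admin"}, "bob": {"staff"}}
    role_perms = {"admin": {("db", "read"), ("db", "write")},
                  "staff": {("db", "read")}}

    def allowed(user, obj, action):
        return any((obj, action) in role_perms.get(role, set())
                   for role in user_roles.get(user, set()))

    print(allowed("alice", "db", "write"))   # True
    print(allowed("bob", "db", "write"))     # False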

11:20
LPOP2018 XSB Position Paper

ABSTRACT. In this position paper we first describe a classic logic programming approach to the solution of (a portion of) the challenge problem involving RBAC. We use the XSB Tabled Prolog language and system \cite{xsbmanual-07}, with ideas from Transaction Logic \cite{trans-tcs94xs}. Then we discuss what would be required to use such an implementation in a real-world application requiring RBAC functionality. And finally we raise issues about the possible design of a general procedural application programming interface to logic (programming) systems. In analogy to ODBC (Open Data Base Connectivity), what might an OLSC (Open Logic System Connectivity) look like?

11:30
Role-Based Access Control via JASP
SPEAKER: Nicola Leone

ABSTRACT. In this paper, we answer the Role-Based Access Control (RBAC) challenge by showcasing the solution of RBAC components using JASP, a flexible framework integrating ASP with Java. In JASP the programmer can simply embed ASP code in a Java program without caring about the interaction with the underlying ASP system. This way, it is possible to seamlessly solve both the tasks suited to imperative specification and those suited to declarative specification, as required by RBAC.

11:40
The RBAC challenge in the Knowledge Base Paradigm
11:50
Role-Based Access Control via LogicBlox
12:00
Logic-based Methods for Software Engineers and Business People

ABSTRACT. Both software engineers and business people tend to be reluctant to adopt logic-based methods. On the one hand, this may be due to unfamiliarity with ``scary'' logical syntax. To ameliorate this, we developed an API for a state-of-the-art logic system using only standard Python syntax. On the other hand, logic-based methods might have more impact if they could be used directly by business people. The recent DMN standard might help in this respect.
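
The abstract does not show the API, but purely as an illustration of the idea of logic wrapped in plain Python syntax, such an interface might look like the following sketch (all names and the naive model search are invented):

    from itertools import chain, combinations

    # Hypothetical sketch: constraints are ordinary Python predicates over a
    # candidate model (a set of atoms); solving is a naive subset search.
    class Theory:
        def __init__(self):
            self.constraints = []

        def add(self, pred):
            self.constraints.append(pred)

        def models(self, domain):
            subsets = chain.from_iterable(combinations(domain, r)
                                          for r in range(len(domain) + 1))
            for m in subsets:
                if all(c(set(m)) for c in self.constraints):
                    yield set(m)

    t = Theory()
    t.add(lambda m: ("a" in m) <= ("b" in m))   # "a implies b", plain Python
    print(next(t.models(["a", "b"])))           # set(): the empty model works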

12:10
Easier Rules and Constraints for Programming

ABSTRACT. We discuss how rules and constraints might be made easier to use in more conventional programming. We use a language that extends DistAlgo, which extends Python, and we use the RBAC programming challenge plus distributed RBAC as examples.

12:20
Questions about RBAC challenge solutions
11:00-12:30 Session 127F: Practical MSO Model Checking 2 (LaSh)
11:00
Courcelle’s Theorem – A Game-Theoretic Approach

ABSTRACT. Courcelle's Theorem states that every problem definable in Monadic Second-Order logic can be solved in linear time on structures of bounded treewidth, for example, by constructing a tree automaton that recognizes or rejects a tree decomposition of the structure. Existing, optimized software like the MONA tool can be used to build the corresponding tree automata, which for bounded treewidth are of constant size. Unfortunately, the constants involved can become extremely large: every quantifier alternation requires a power set construction for the automaton. Here, the required space can become a problem in practical applications. We present a direct approach based on model checking games, which avoids the expensive power set construction. The resulting tool solves some problems on graphs quite efficiently if the treewidth is moderate.
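
As a concrete instance of what the theorem covers: 3-colorability, a standard textbook example of an MSO-definable property (not one from the talk), is expressed as $\exists X_1 \exists X_2 \exists X_3\, (\forall v\, (v \in X_1 \lor v \in X_2 \lor v \in X_3) \land \bigwedge_{i=1}^{3} \forall u \forall v\, (\mathrm{edge}(u,v) \rightarrow \lnot(u \in X_i \land v \in X_i)))$, and is therefore linear-time solvable on graphs of bounded treewidth.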

12:05
A Derivative-Based Decision Procedure for WS1S

ABSTRACT. Weak monadic second-order logic of one successor (WS1S) is a simple and natural formalism for specifying regular properties. WS1S is decidable, although the decision procedure's complexity is non-elementary. Typically, decision procedures for WS1S exploit the logic-automaton connection, i.e., they escape the simple and natural formalism by translating formulas into equally expressive but less concise regular structures such as finite automata. In this talk, I will present a decision procedure for WS1S that stays within the logical world by directly operating on formulas. The key operation is the derivative of a formula, modeled after Brzozowski's derivatives of regular expressions. The presented decision procedure has been formalized and proved correct in the interactive proof assistant Isabelle.
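
For background, Brzozowski's derivative of a regular expression $r$ by a letter $a$, after which the formula derivatives are modeled, is the syntactically computed expression $\partial_a r$ with $L(\partial_a r) = \{w \mid aw \in L(r)\}$; for example, $\partial_a(r + s) = \partial_a r + \partial_a s$, $\partial_a(r \cdot s) = (\partial_a r) \cdot s + \nu(r) \cdot \partial_a s$, and $\partial_a(r^*) = (\partial_a r) \cdot r^*$, where $\nu(r)$ is $\varepsilon$ if $r$ accepts the empty word and $\emptyset$ otherwise.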

11:00-12:30 Session 127G (MLP)
Location: Blavatnik LT1
11:00
Static Analysis for Developer Efficiency with Infer

ABSTRACT. Infer is an open-source static analysis tool for Java, C, C++, and Objective-C. Infer has been successfully deployed at Facebook, where it identifies hundreds of potential bugs per month in mobile apps and backend code. Infer uses AI (Abstract Interpretation) and ML (more precisely the OCaml implementation) to analyse source code. This talk will present infer and attempt to draw bridges between infer and other AI/ML techniques.

11:45
Learning to Type

ABSTRACT. JavaScript developers enjoy the freedom of not having to specify types in their source code, but JavaScript is also notoriously harder to refactor or debug than a statically typed language. Type systems such as FlowType or TypeScript have been designed to extend JavaScript but adding type annotations to an existing code base can be a laborious task. In this paper, we present a Deep Learning system able to facilitate this process by accurately suggesting probable type annotations to all the relevant nodes of a program.

We generate our training set using runtime information, observing the types of various expressions during their execution. We represent programs as graphs using abstract syntax trees enriched with string information (e.g. variable names) and over 90 additional types of edges, encoding scope information, data flow, and the relative position of different nodes. We then feed these graphs into a tailored neural network architecture designed to propagate information efficiently across edges, support multiple edge types, and pay attention to different edges in different contexts. We finally use this attention mechanism to explain back to the user what drove each prediction.

Our experiments show that good accuracy can be obtained with a relatively small amount of training data, suggesting that applying Deep Learning to the analysis of programs is somewhat easier than in classical applications, where large amounts of training data are usually required.

11:00-12:30 Session 127H: Formal specification and verification (NSV)
11:00
Compositional Taylor Model Based Validated Integration

ABSTRACT. We present a compositional validated integration method based on Taylor models. Our method combines solutions for lower dimensional subsystems into solutions for a higher dimensional composite system, rather than attempting to solve the higher dimensional system directly. We have implemented the method in an extension of the Flow* tool. Our preliminary results are promising, suggesting significant gains for some biological systems with nontrivial compositional structure.
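
As background: a Taylor model over a domain $D$ is a pair $(p, I)$ of a polynomial and an interval remainder enclosing a function $f$ in the sense that $f(x) \in p(x) + I$ for all $x \in D$; validated integration propagates such enclosures through the flow of the differential equations, maintaining a guaranteed error bound at every step.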

11:30
Automatic Generation and Formal Verification of an Interior Point Algorithm

ABSTRACT. With the increasing power of computers, real-time algorithms tend to become more complex and therefore require better guarantees of safety. Among algorithms sustaining autonomous embedded systems, model predictive control (MPC) is now used to compute online trajectories, for example in the SpaceX rocket landing. The core components of these algorithms, such as the convex optimization function, will then have to be certified at some point. This paper focuses specifically on that problem and presents a method to formally prove a primal linear programming implementation.

We explain how to write and annotate the code with Hoare triples in a way that eases their automatic proof. The proof process itself is performed with the WP plugin of Frama-C and relies only on SMT solvers.

Combined with a framework producing both the embedded code and its annotations, this work would make it possible to certify advanced autonomous functions relying on online optimization.
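
For readers unfamiliar with the notation: a Hoare triple $\{P\}\; S\; \{Q\}$ asserts that if precondition $P$ holds before statement $S$ executes, then postcondition $Q$ holds afterwards; a toy instance is $\{x \ge 0\}\; x := x + 1\; \{x \ge 1\}$.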

12:00
Towards Enumerative Invariant Synthesis in SMT Solvers

ABSTRACT. In the past several years, syntax-guided synthesis (SyGuS) solvers have made significant progress as efficient backend reasoners in several applications. One approach for syntax-guided synthesis, as implemented in the SMT solver CVC4, has shown that SMT solvers can act as efficient stand-alone tools for synthesis. This talk focuses on a new enumerative approach for invariant synthesis in the SMT solver CVC4. We describe a divide-and-conquer approach that makes use of an evolving set of refinement lemmas that constrain concrete points of the invariant-to-synthesize. We show its applicability to synthesis in multiple SMT domains, including for synthesizing invariants that involve non-linear arithmetic.
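
To give a flavor of the refinement-lemma idea, here is a caricature in Python (toy transition system and candidate grammar invented by us; the actual procedure enumerates SMT terms under a syntactic grammar): counterexamples from failed checks become concrete points that cheaply screen later candidates.

    # Toy system: x0 = 0, step x' = x + 2, safety: never x == 3.
    DOMAIN = range(-20, 21)          # bounded check standing in for SMT queries
    init, step, bad = (lambda x: x == 0), (lambda x: x + 2), (lambda x: x == 3)

    candidates = [("x >= 0",     lambda x: x >= 0),
                  ("x != 5",     lambda x: x != 5),
                  ("x % 2 == 0", lambda x: x % 2 == 0)]
    pos, neg = [], []                # points the invariant must contain / exclude

    def check(inv):
        for x in DOMAIN:
            if init(x) and not inv(x):       return ("pos", x)  # must contain x
            if inv(x) and bad(x):            return ("neg", x)  # must exclude x
            if inv(x) and not inv(step(x)):  return (None, x)   # not inductive
        return "ok"

    for name, inv in candidates:
        if not all(inv(p) for p in pos) or any(inv(n) for n in neg):
            continue                 # refuted by stored refinement points alone
        verdict = check(inv)
        if verdict == "ok":
            print("invariant:", name)        # -> invariant: x % 2 == 0
            break
        kind, x = verdict
        if kind == "pos": pos.append(x)
        elif kind == "neg": neg.append(x)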

11:00-12:30 Session 127I: Verifying parallel programs (PLR)

Techniques to verify parallel programs.

 

11:00
Permission Inference for Array Programs

ABSTRACT. Information about the memory locations accessed by a program is, for instance, required for program parallelisation and program verification. Existing inference techniques for this information provide only partial solutions for the important class of array-manipulating programs. In this paper, we present a static analysis that infers the memory footprint of an array program in terms of permission pre- and postconditions as used, for example, in separation logic. This formulation allows our analysis to handle concurrent programs and produces specifications that can be used by verification tools.

Our analysis expresses the permissions required by a loop via maximum expressions over the individual loop iterations. These maximum expressions are then solved by a novel maximum elimination algorithm, in the spirit of quantifier elimination.
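
Schematically (symbols invented for illustration): if iteration $i$ of a loop needs permission amount $\mathit{pre}_i(l)$ to location $l$, the loop as a whole can require $\max_{0 \le i < n} \mathit{pre}_i(l)$ for each $l$, and it is such expressions that the maximum elimination algorithm rewrites into closed, quantifier-free form.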

Our approach is sound and is implemented; an evaluation on existing benchmarks for memory safety of array programs demonstrates accurate results, even for programs with complex access patterns and nested loops.

11:45
RTLCheck: Automatically Verifying the Memory Consistency of Processor RTL

ABSTRACT. Paramount to the viability of a parallel architecture is the correct implementation of its memory consistency model (MCM). Although tools exist for verifying consistency models at several design levels, a problematic verification gap exists between checking an abstract microarchitectural specification of a consistency model and verifying that the actual processor RTL implements it correctly. This paper presents RTLCheck, a methodology and tool for narrowing the microarchitecture/RTL MCM verification gap. Given a set of microarchitectural axioms about MCM behavior, an RTL design, and user-provided mappings to assist in connecting the two, RTLCheck automatically generates the SystemVerilog Assertions (SVA) needed to verify that the implementation satisfies the microarchitectural specification for a given litmus test program. When combined with existing automated MCM verification tools, RTLCheck enables test-based full-stack MCM verification from high-level languages to RTL. We evaluate RTLCheck on a multicore version of the RISC-V V-scale processor, and discover a bug in its memory implementation. Once the bug is fixed, we verify that the multicore V-scale implementation satisfies sequential consistency across 56 litmus tests. The JasperGold property verifier finds complete proofs for 89% of our properties, and can find bounded proofs for the remaining properties.

11:00-12:30 Session 127J (REFINE)
Location: Blavatnik LT2
11:00
Challenges of specifying concurrent program components

ABSTRACT. The purpose of this paper is to review some of the challenges of formally specifying components of concurrent programs in a manner suitable for refining them to an implementation. We present some approaches to devising specifications, in some cases investigating different forms of specification suitable for different contexts. Our focus is on shared variable concurrency rather than event-based process algebras.

11:30
Programming Without Refinement

ABSTRACT. To derive a program for a given specification R means to find an artifact P that satisfies two conditions: P is executable in some programming language; and P is correct with respect to R. Refinement-based program derivation achieves this goal in a stepwise manner by enhancing executability while preserving correctness until we achieve complete executability. In this paper, we argue that it is possible to invert these properties, and to derive a program by enhancing correctness while preserving executability (i.e. proceeding from one executable program to another) until we achieve absolute correctness. Of course, this latter process is possible only if we know what it means to enhance correctness.

12:00
Ordering strict partial orders to model behavioural refinement

ABSTRACT. Software is now ubiquitous and involved in complex interactions with human users and the physical world in so-called cyber-physical systems (CPS), where the management of time is a major issue. Separation of concerns is a key asset in the development of these ever more complex systems. Two different kinds of separation exist: a first one corresponds to the different steps in a development leading from the abstract requirements to the system implementation and is qualified as vertical; it matches the commonly used notion of refinement. A second one corresponds to the various components in the system architecture at a given level of refinement and is called horizontal. Refinement has been studied thoroughly for the data, functional and concurrency concerns, while our work focuses on the time modeling concern. This contribution aims at providing a formal construct for the verification of vertical separation in time models, through the definition of an order between the strict partial orders used to relate the different instants in asynchronous systems. This work has been conducted using the proof assistant Agda and is connected to previous work on the asynchronous language CCSL, which has also been modeled using the same tool.

11:00-12:30 Session 127K (SYNT)
Location: Maths LT3
11:00
Reactive Synthesis from LTL Specification with Spot

ABSTRACT. We present ltlsynt, a new tool for reactive synthesis from LTL specifications. It relies on the efficiency of Spot to translate the input LTL specification to a deterministic parity automaton. The latter yields a parity game, which we solve with Zielonka's recursive algorithm.

The approach taken in ltlsynt was widely believed to be impractical, due to the double-exponential size of the parity game, and to the open status of the complexity of parity games resolution. ltlsynt ranked second of its track in the 2017 edition of the SYNTCOMP competition. This demonstrates the practical applicability of the parity game approach, when backed by efficient manipulations of omega-automata such as the ones provided by Spot. We present our approach and report on our experimental evaluation of ltlsynt to better understand its strengths and weaknesses.
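
The game-solving step can be made concrete with a toy solver. Below is a minimal Python sketch of Zielonka's recursive algorithm (max-priority convention, naive set-based attractors; the representation is invented for illustration and is unrelated to ltlsynt's actual implementation):

    # Player 0 wins a play iff the highest priority seen infinitely often
    # is even; attractor(..., player) is the set from which that player can
    # force the play into `target`.
    def attractor(nodes, edges, owner, target, player):
        attr = set(target)
        changed = True
        while changed:
            changed = False
            for v in nodes - attr:
                succs = [w for w in edges[v] if w in nodes]
                if (owner[v] == player and any(w in attr for w in succs)) or \
                   (owner[v] != player and succs and all(w in attr for w in succs)):
                    attr.add(v)
                    changed = True
        return attr

    def zielonka(nodes, edges, owner, priority):
        """Return (win0, win1): the winning regions of players 0 and 1."""
        if not nodes:
            return set(), set()
        p = max(priority[v] for v in nodes)
        player = p % 2
        top = {v for v in nodes if priority[v] == p}
        a = attractor(nodes, edges, owner, top, player)
        w0, w1 = zielonka(nodes - a, edges, owner, priority)
        win_opp = w1 if player == 0 else w0
        if not win_opp:
            return (nodes, set()) if player == 0 else (set(), nodes)
        b = attractor(nodes, edges, owner, win_opp, 1 - player)
        w0, w1 = zielonka(nodes - b, edges, owner, priority)
        return (w0, w1 | b) if player == 0 else (w0 | b, w1)

    # Toy game: player 1 steers "b" into the odd cycle b-c and wins everywhere.
    edges    = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
    owner    = {"a": 0, "b": 1, "c": 0}
    priority = {"a": 2, "b": 1, "c": 0}
    print(zielonka({"a", "b", "c"}, edges, owner, priority))
    # -> (set(), {'a', 'b', 'c'})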

11:18
On the Expressiveness of HyperLTL Synthesis
SPEAKER: Philip Lukert

ABSTRACT. Hyperproperties generalize trace properties, i.e., sets of traces, to sets of sets of traces. Prominent examples are information-flow policies like observational determinism and noninterference. In this paper, we give an overview of the problem of automatically synthesizing implementations from hyperproperties given in the temporal logic HyperLTL. We show that HyperLTL synthesis subsumes various synthesis approaches studied in the literature: synthesis under incomplete information, distributed synthesis, symmetric synthesis, fault-tolerance synthesis, and synthesis with dynamic hierarchical information. As a case study, where the control of the flow of information and the distributivity of the system are equally important, we consider the dining cryptographers problem: neither well-studied LTL synthesis nor its distributed variant is sufficient for this problem. Our implementation of the (bounded) synthesis procedure for HyperLTL, called BoSyHyper, is able to automatically construct a solution to this problem. We present the decidability results for HyperLTL synthesis: we identify, based on the quantifier structure of the HyperLTL formula, for which fragments the synthesis problem remains decidable and for which fragments we have to rely on a bounded variation of the decision procedure.
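
As a reminder of the policies in play: observational determinism is expressible in HyperLTL (in one common formulation) as $\forall \pi\, \forall \pi'.\; \Box(I_\pi = I_{\pi'}) \rightarrow \Box(O_\pi = O_{\pi'})$, requiring any two executions that agree on their inputs to agree on their observable outputs.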

11:36
Maximum Realizability for Linear Temporal Logic Specifications

ABSTRACT. Automatic synthesis from linear temporal logic (LTL) specifications is widely used in robotic motion planning and control of autonomous systems. A common specification pattern in such applications consists of an LTL formula describing the requirements on the behaviour of the system, together with a set of additional desirable properties. We study the synthesis problem in settings where the overall specification is unrealizable, more precisely, when some of the desirable properties have to be (temporarily) violated in order to satisfy the system's objective. We provide a quantitative semantics of sets of safety specifications, and use it to formalize the ``best-effort'' satisfaction of such soft specifications while satisfying the hard LTL specification. We propose an algorithm for synthesizing implementations that are optimal with respect to this quantitative semantics. Our method builds upon the idea of the bounded synthesis approach, and we develop a MaxSAT encoding which allows for maximizing the quantitative satisfaction of the safety specifications. We evaluate our algorithm on scenarios from robotics and power distribution networks.
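
As a hedged illustration of the MaxSAT idea, the toy sketch below (using z3's Optimize interface rather than a dedicated MaxSAT solver; the Boolean stand-ins for the specifications are hypothetical, not the paper's encoding) uses hard constraints for the hard LTL objective and weighted soft constraints for the desirable safety properties:

```python
# Hard constraints stand for the bounded-synthesis encoding of the hard LTL
# objective; weighted soft constraints stand for the soft safety properties.
from z3 import Bool, Optimize, Implies, Not, sat

hard_spec = Bool("hard_objective_holds")   # hypothetical stand-in variables
soft_a = Bool("safety_a_holds")
soft_b = Bool("safety_b_holds")

opt = Optimize()
opt.add(hard_spec)                         # the hard specification must hold
opt.add(Implies(soft_a, Not(soft_b)))      # a toy conflict between soft specs
opt.add_soft(soft_a, weight=2)             # prefer satisfying soft_a ...
opt.add_soft(soft_b, weight=1)             # ... over soft_b when they conflict
if opt.check() == sat:
    print(opt.model())                     # soft_a holds, soft_b is sacrificed
```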

11:54
Warm-Starting Fixed-Point Based Control Synthesis
SPEAKER: Zexiang Liu

ABSTRACT. In this work we propose a patching algorithm to incrementally modify controllers, synthesized to satisfy a temporal logic formula, when some of the control actions become unavailable. The main idea of the proposed algorithm is to ``warm-start'' the synthesis process with an existing fixed-point based controller that has a larger action set. By exploiting the structure of fixed-point based controllers, our algorithm avoids repeated computations while synthesizing a controller with the restricted action set. Moreover, we show that the algorithm is sound and complete, that is, it provides the same guarantees as synthesizing a controller from scratch with the new action set. An example of synthesizing controllers for a simplified walking robot model under ground constraints is used to illustrate the approach. In this application, the ground constraints determine the action set and might not be known a priori. It is therefore of interest to quickly modify a controller synthesized for an unconstrained surface when new constraints are encountered. Our simulations indicate that the proposed approach provides at least an order of magnitude speed-up compared to synthesizing a controller from scratch.

12:12
Safe, Automated and Formal Synthesis of Digital Controllers for Continuous Plants

ABSTRACT. We present a sound and automated approach to synthesizing safe, digital controllers for physical plants represented as linear, time-invariant models. The synthesis accounts for errors caused by the digitization effects introduced by digital controllers operating in either fixed- or floating-point arithmetic. Our approach uses counterexample-guided inductive synthesis (CEGIS): in the first phase an inductive generalisation engine produces a possible solution that is safe for some possible initial conditions but may not be safe for all initial conditions. Safety for all initial conditions is then verified in a second phase either via bounded model checking or abstract acceleration; if the verification step fails, a counterexample is provided to the inductive generalisation and the process iterates until a safe controller is obtained. We implemented our approach in a tool named DSSynth (Digital-System Synthesizer) and demonstrate its practical value by automatically synthesizing safe controllers for physical plant models from the digital control literature.

11:00-12:30 Session 127L (TLA)
Location: Maths Boardroom
11:00
An Animation Module for TLA+

ABSTRACT. The Animation module is a new TLA+ module which allows for the creation of interactive visualizations of TLC execution traces that can be run inside a web browser. The talk will present an overview of the core concepts of the Animation module and how it works, and then demonstrate a few examples of TLA+ specs that have been animated using the module.

11:45
BMCMT – Bounded Model Checking of TLA+ Specifications with SMT
SPEAKER: Igor Konnov

ABSTRACT. We present the development version of BMCMT, a symbolic model checker for TLA+. It determines whether a TLA+ specification satisfies an invariant candidate by checking satisfiability of an SMT formula that encodes: (1) an execution of bounded length, and (2) preservation of the invariant candidate in every state of the execution. Our tool is still in the experimental phase, due to a number of challenges posed by TLA+ semantics to SMT solvers. We will discuss these challenges and our current approach to them in the talk. Our preliminary experiments show that BMCMT scales better than the standard TLA+ model checker TLC for large parameter values, e.g., when a TLA+ specification models a system of 10 processes, though TLC is more efficient for tiny parameters, e.g., when the system has 3 processes.
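
The bounded model checking idea behind such a query can be sketched as follows; this is a generic toy (a counter system encoded in z3's Python bindings), not BMCMT's actual encoding of TLA+ semantics, and here the SMT query searches for a bounded execution containing a state that violates the invariant candidate:

```python
# Toy system: Init says x = 0, Next says x' = x + 1; candidate invariant x < 10.
from z3 import Int, Solver, Or, sat

K = 12                                        # bound on the execution length
x = [Int(f"x_{i}") for i in range(K + 1)]     # one copy of the state per step

s = Solver()
s.add(x[0] == 0)                              # initial-state predicate
for i in range(K):
    s.add(x[i + 1] == x[i] + 1)               # transition relation, unrolled
s.add(Or([x[i] >= 10 for i in range(K + 1)])) # some state violates x < 10

if s.check() == sat:
    m = s.model()
    print("counterexample:", [m[v] for v in x])
else:
    print("invariant holds up to bound", K)
```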

11:00-12:30 Session 127M (ThEdu)
11:00
Lucas Interpretation from Programmers’ Perspective

ABSTRACT. In the process of pushing the ISAC prototype towards maturity for use in engineering courses, one of the most crucial points is to make programming in ISAC convenient. Programming here means describing the mathematical algorithms (a functional program without input/output, so with no concern for didactics and dialogues, which are handled by a separate component not discussed here) of engineering problems using libraries of methods and specifications as well as the respective theories. ISAC builds upon the theorem prover Isabelle, which offers a convenient programming environment and a specific function package. This is considered an appropriate means to improve programming in ISAC as required. The first steps of this improvement are described in this paper, and the remaining steps are listed in detail.

11:30
GeoCoq: formalized foundations of geometry

ABSTRACT. Geometry is central in both the history and teaching of the structure of mathematical proofs and the axiomatic method.

In this talk, we will give an overview of the GeoCoq library. The library, developed in collaboration with P. Boutry, G. Braun, C. Gries and P. Schreck, consists of a formalization of geometry using the Coq proof assistant.

Starting from the foundations based on either Euclid’s, Tarski’s, or Hilbert’s axiom systems, we formalized the traditional results which can be used for high-school exercises.

We will discuss the axiomatizations and the automation necessary for use in the classroom.

Based on some examples taken from high-school exercises, we will revisit some traditional proofs from a formal point of view.

11:00-12:30 Session 127N: Model Checking II (VSTTE)

Model Checking II

Location: Maths LT2
11:00
Executable Counterexamples in Software Model Checking

ABSTRACT. Counterexamples, execution traces of the system that illustrate how an error state is reachable from the initial state, are essential for understanding verification failures. They are one of the most important features of Model Checkers, distinguishing them from Abstract Interpretation and other Static Analysis techniques by providing a user with actionable information to debug their system and/or the specification. While in Hardware and Protocol verification counterexamples are re-playable in the system, in Software Model Checking they take the form of a textual report (often presented as a semi-structured document). This is problematic since it complicates the debugging process by preventing the use of existing processes and tools such as debuggers, delta-debuggers, fault localization, fault minimization, etc.

In this paper, we argue that the most useful form of a counterexample is an \emph{executable harness} that can be linked with the code under analysis (CUA) to produce an executable that exhibits the fault witnessed by the counterexample. A harness is different from a unit test since it can control the CUA directly, bypassing potentially complex logic that interprets program inputs. This makes harnesses easier to construct than complete unit tests. We describe the harness generation that we have developed in the SeaHorn verification framework. We identify key challenges for generating harnesses from Model Checking counterexamples of complex memory-manipulating programs that use many third-party libraries and external calls. We validate our prototype on the verification benchmarks from Linux Device Drivers in SV-COMP. Finally, we discuss many of the open challenges and suggest avenues for future work.
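
A minimal sketch of the harness idea, not SeaHorn's actual generator: given a counterexample trace recorded as the sequence of values returned by an external function (here a hypothetical `int ext(void)`), emit a C stub that replays those values, so that linking it with the CUA yields an executable reproducing the failure.

```python
# The counterexample is assumed to record the values an external function
# (here a hypothetical `int ext(void)`) returned along the failing trace.
trace = [0, 3, 7]

harness = "\n".join([
    "#include <stddef.h>",
    f"static int ext_values[] = {{{', '.join(map(str, trace))}}};",
    "static size_t ext_idx = 0;",
    "/* Replays the recorded environment values; linked with the CUA, the",
    "   resulting executable deterministically reproduces the failure. */",
    "int ext(void) { return ext_values[ext_idx++]; }",
])
print(harness)
```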

11:30
Extending VIAP to Handle Array Programs

ABSTRACT. We have previously described a novel, fully automated program verification system called VIAP, primarily for verifying the safety properties of programs with integer assignments. In this paper, we extend it to programs with arrays. Our extension is not restricted to single-dimensional arrays but is general: it works for multidimensional and nested arrays as well. In the most recent \textit{SV-COMP} 2018 competition, VIAP with the array extension came second in the ReachSafety-Arrays sub-category, behind \textit{VeriAbs}.

12:00
Lattice-Based Refinement in Bounded Model Checking

ABSTRACT. In this paper we present an algorithm for bounded model checking, with SMT solvers, of programs with library functions — either standard or user-defined. Typically, if program correctness depends on the output of a library function, the model-checking process either treats this function as an uninterpreted function or is required to use a theory under which the function in question is fully defined. The former approach leads to numerous spurious counterexamples, whereas the latter faces the danger of the state-explosion problem, where the resulting formula is too large to be solved by means of modern SMT solvers.

We extend the approach of user-defined summaries and propose to represent the set of existing summaries for a given library function as a lattice of subsets of summaries, with the meet and join operations defined as intersection and union, respectively. The refinement process is then driven by a traversal of the lattice: at each node, the SMT solver uses the subset of summaries stored in that node to search for a satisfying assignment. The direction of the traversal is determined by the results of concretising an abstract counterexample obtained at the current node. Our experimental results demonstrate that this approach solves a number of instances that were previously unsolvable by existing bounded model checkers.
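
The traversal can be sketched on a toy example; everything below (the summaries for a hypothetical `abs` library function, the property, and the refinement order) is invented for illustration and is not the paper's implementation. A spurious abstract counterexample moves the search to a larger subset of summaries:

```python
from z3 import Int, Solver, And, Or, Not, sat

x, y = Int("x"), Int("y")
SUMMARIES = {                    # hypothetical summaries for `y = abs(x)`
    "exact":  Or(y == x, y == -x),
    "nonneg": y >= 0,
}
PROPERTY = y >= 0                # claim to be verified about the program

def abstract_cex(active):
    """Model violating PROPERTY under the currently active summary subset."""
    s = Solver()
    s.add(And([SUMMARIES[name] for name in active]), Not(PROPERTY))
    return s.model() if s.check() == sat else None

def concretizes(m):
    """Replay the abstract counterexample against the real library function."""
    return abs(m[x].as_long()) == m[y].as_long()

active = ["exact"]               # start at one node of the lattice
while (m := abstract_cex(active)) is not None:
    if concretizes(m):
        print("real counterexample:", m)
        break
    # Spurious: move up the lattice by activating one more summary.
    remaining = [n for n in SUMMARIES if n not in active]
    if not remaining:
        print("no refinement left")
        break
    active.append(remaining[0])
else:
    print("property proved using summaries:", active)
```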

11:00-12:30 Session 127P: Distributed Systems II (VaVAS)

Contributed Talks: Distributed Systems II

Location: Maths LT1
11:00
Work-in-Progress: Adaptive Neighborhood Resizing for Stochastic Reachability in Distributed Flocking
SPEAKER: Anna Lukina

ABSTRACT. We present DAMPC, a distributed, adaptive-horizon and adaptive-neighborhood algorithm for solving the stochastic reachability problem in distributed flocking modeled as a Markov decision process. At each time step, DAMPC takes the following actions: First, every agent calls a centralized, adaptive-horizon model-predictive control (AMPC) algorithm to obtain an optimal solution for its local neighborhood. Second, the agents derive the flock-wide optimal solution through a sequence of consensus rounds. Third, the neighborhood is adaptively resized using a flock-wide, cost-based Lyapunov function V. This improves efficiency without compromising convergence. The proof of statistical global convergence is non-trivial and involves showing that V follows a monotonically decreasing trajectory despite potential fluctuations in cost and neighborhood size. We evaluate DAMPC's performance using statistical model checking. Our results demonstrate that, compared to AMPC, DAMPC achieves a considerable speed-up (a factor of 2 in some cases) with only a slightly lower rate of convergence. The smaller average neighborhood size and lookahead horizon demonstrate the benefits of the DAMPC approach for stochastic reachability problems involving any distributed controllable system that possesses a cost function.

11:30
Policing Functions for Machine Learning Systems

ABSTRACT. Machine learning systems typically involve complex decision-making mechanisms while lacking clear and concise specifications. Demonstrating the quality of machine learning systems is therefore a challenging task. We propose an approach combining formal methods and metamorphic testing for improving the quality of machine learning systems. In particular, our framework enables the development of policing functions for runtime monitoring of machine learning systems, based on metamorphic relations.

12:00
Social Consequence Engines (Abstract)
SPEAKER: Marina De Vos

ABSTRACT. This abstract considers the problem of how autonomous systems might account for first- (and higher-)order norms in their own actions and those of others, through on-the-fly symbolic model checking of (multiple) institutional models.

11:00-12:30 Session 127Q: Program termination and orderings (I) (WST)
11:00
Procedure-Modular Termination Analysis

ABSTRACT. Non-termination is the root cause of a variety of program bugs, such as hanging programs and denial-of-service vulnerabilities. This makes an automated analysis that can prove the absence of such bugs highly desirable. To scale termination checks to large systems, an interprocedural termination analysis seems essential. This is a largely unexplored area of research in termination analysis, where most effort has focused on small but difficult single-procedure problems.

We present a modular termination analysis for C programs using template-based interprocedural summarisation. Our analysis combines a context-sensitive, over-approximating forward analysis with the inference of under-approximating preconditions for termination. Bit-precise termination arguments are synthesised over lexicographic linear ranking function templates. Our experimental results show the advantage of interprocedural reasoning over monolithic analysis in terms of efficiency, while retaining comparable precision.

11:30
GPO: A Path Ordering for Graphs

ABSTRACT. We generalize term rewriting techniques to finite, directed, ordered multigraphs. We define well-founded monotonic graph rewrite orderings inspired by the recursive path ordering on terms. Our graph path ordering provides a building block for rewriting with such graphs, which should impact the many areas in which computations take place on graphs.

12:00
Objective and Subjective Specifications

ABSTRACT. We examine specifications for dependence on the agent that performs them. We look at the consequences for the Church-Turing Thesis and for the Halting Problem.

12:30-14:00 Lunch Break
14:00-15:00 Session 128A: ARQNL Invited Talk: Giles Reger (ARQNL)
Chair:
14:00
Going Beyond First-Order Logic with Vampire

ABSTRACT. TBA

14:00-15:30 Session 128B (ASPOCP)
14:00
Towards Abstraction in ASP with an Application on Reasoning about Agent Policies

ABSTRACT. ASP programs are a convenient tool for problem solving, but with large problem instances the size of the state space can be prohibitive. We consider abstraction as a means of over-approximation and introduce a method to automatically abstract (possibly non-ground) ASP programs that preserves their structure, while reducing the size of the problem. One particular application case is the problem of defining declarative policies for reactive agents and reasoning about them, which we illustrate on examples.

14:30
Datalog for Static Analysis Involving Logical Formulae

ABSTRACT. Datalog has become a popular language for writing static analyses. Because Datalog is very limited, some implementations of Datalog for static analysis have extended it with new language features. However, even with these features it is hard or impossible to express a large class of analyses because they use logical formulae to represent program state. FormuLog fills this gap by extending Datalog to represent, manipulate, and reason about logical formulae. We have used FormuLog to implement declarative versions of symbolic execution and abstract model checking, analyses previously out of the scope of Datalog-based languages. While this paper focuses on the design of FormuLog and one of the analyses we have implemented in it, it also touches on a prototype implementation of the language and identifies performance optimizations that we believe will be necessary to scale FormuLog to real-world static analysis problems.

15:00
Syntactic Conditions for Antichain Property in Consistency Restoring Prolog

ABSTRACT. We study syntactic conditions which guarantee that a CR-Prolog (Consistency Restoring Prolog) program has the antichain property: no answer set is a proper subset of another. A notable such condition is that the program's dependency graph is acyclic and has no directed path from one cr-rule head literal to another.

14:00-15:30 Session 128C: Invited talk: Michael Emmi (AVOCS)

Automatic Verification of Concurrent Objects

14:00
Automatic Verification of Concurrent Objects

ABSTRACT. Efficient implementations of abstract data types (ADTs) like collections (e.g., key-value stores, sets, queues) and synchronization (e.g., signals, semaphores, locks) are essential to modern computing. Programming such objects is, however, error prone: in minimizing the synchronization overhead among concurrent invocations, one risks violating consistency with their ADTs. Developing automatic validation capable of mitigating such risks is also challenging. Compared to typical safety properties like assertion validity, which are easily verified in linear time per program execution, the runtime verification of consistency generally requires exponential time. Static consistency verification is generally undecidable without bounding the number of concurrent operations, whereas deciding assertion validity only requires bounding implementation state.

In recent works, we have developed scalable and effective methodologies for the specification and verification of concurrent objects. Exploiting the algebraic structure of common ADTs, we demonstrate improved theoretical limits for verification, effective approximations, and practical algorithms for inferring specifications and verifying implementations. Besides enabling tractable verification for many concurrent objects, our work has uncovered novel symbolic characterizations of consistency (e.g., linearizability) checking, which was previously thought to require explicit exponential enumeration (e.g., of possible linearizations).

14:00-15:30 Session 128D (ICLP-DC)
Chair:
Location: Maths L6
14:00
Scalable Robotic Intra-Logistics with Answer Set Programming

ABSTRACT. Over time, Answer Set Programming (ASP) has gained traction as a versatile logic programming semantics with performant processing systems, used by a growing number of significant applications in academia and industry. However, this development is threatened by a lack of commonly accepted design patterns and techniques for ASP to address dynamic applications on a real-world scale. To this end, we identified robotic intra-logistics as a representative scenario, a major domain of interest in the context of the fourth industrial revolution. For this setting, we aim to provide scalable and efficient ASP-based solutions by (1) stipulating a standardized test and benchmark framework; (2) leveraging existing ASP techniques through new design patterns; and (3) extending ASP with new functionalities. In this paper we expand on the subject matter as well as detail our current progress and future plans.

14:30
Translating P-log, LP^{MLN}, LPOD, and CR-Prolog2 into Standard Answer Set Programs

ABSTRACT. Answer set programming (ASP) is a particularly useful approach for nonmonotonic reasoning in knowledge representation. In order to handle quantitative and qualitative reasoning, a number of different extensions of ASP have been invented, such as the quantitative extensions LP^{MLN} and P-log, and the qualitative extensions LPOD and CR-Prolog2.

Although each of these formalisms introduces new and unique concepts, we present reductions of each of these languages into the standard ASP language. This not only gives an alternative insight into the semantics of these extensions in terms of standard ASP, but also shows that standard ASP is capable of representing both quantitative and qualitative uncertainty. What's more, our translations yield a way to tune the semantics of LPOD and CR-Prolog2: since the semantics of each formalism is represented in ASP rules, we can modify the semantics by modifying the corresponding ASP rules.

For future work, we plan to create a new formalism that is capable of representing quantitative and qualitative uncertainty at the same time. Since LPOD rules are simple and informative, we will first try to include quantitative preference into LPOD by adding the concept of weights, and tune the semantics of LPOD by modifying the translated standard ASP rules.

15:00
Speeding Up Lazy-Grounding Answer Set Solving

ABSTRACT. The grounding bottleneck is an important open issue in Answer Set Programming. Lazy grounding addresses it by interleaving grounding and search. However, the performance of current lazy-grounding solvers is not yet comparable to that of ground-and-solve systems. The aim of this thesis is to extend prior work on lazy grounding with novel heuristics and other techniques, like non-ground conflict learning, in order to speed up solving. Parts of the expected results will be beneficial for ground-and-solve systems as well.

14:00-15:30 Session 128E: Challenge Solutions and Constraint Solving (LPOP)
14:00
Panel: Practice of Modeling and Programming
14:30
Invited Talk: A Modeling Language Based on Semantic Typing

ABSTRACT. A growing trend in modeling is the construction of high-level modeling languages that invoke a suite of solvers. This requires automatic reformulation of parts of the problem to suit different solvers, a process that typically introduces many auxiliary variables. We show how semantic typing can manage relationships between variables created by different parts of the problem. These relationships must be revealed to the solvers if efficient solution is to be possible. The key is to view variables as defined by predicates, and declaration of variables as analogous to querying a relational database that instantiates the predicates. The modeling language that results is self-documenting and self-checks for a number of modeling errors.

15:10
A Picat-based XCSP Solver - from Parsing, Modeling, to SAT Encoding
SPEAKER: Neng-Fa Zhou

ABSTRACT. This document gives an overview of a Picat-based XCSP3 solver, named PicatSAT, submitted to the 2018 XCSP competition. The solver demonstrates the strengths of Picat, a logic-based language, in parsing, modeling, and encoding constraints into SAT.

15:20
Confluence Analysis of Cognitive Models with Constraint Handling Rules

ABSTRACT. Computational cognitive modeling explores cognition by developing a detailed, process-based understanding in the form of computational models.

Currently, computational cognitive modeling architectures as well as the implementations of cognitive models are typically ad-hoc constructs. They lack a formalization from the computer science point of view. This impedes analysis of the models.

In this work, we present how cognitive models can be formalized and analyzed with the help of logic programming in the form of Constraint Handling Rules (CHR). We concentrate on confluence analysis of cognitive models in the popular cognitive architecture Adaptive Control of Thought -- Rational (ACT-R).

14:00-15:30 Session 128F: Frameworks (LaSh)
14:00
Declarative Dynamic Programming with Inverse Coupled Rewrite Systems

ABSTRACT. Dynamic Programming solves optimization problems over an exponential search space in polynomial time. Bellman's Principle of Optimality is its mathematical prerequisite. Recursion and tabulation of intermediates are its essential ingredients to achieve efficiency. While these basic principles are well-understood, such algorithms are still intricate to develop and hopeless to debug. An optimal solution to a dynamic programming (DP) problem is identified by its optimal score. Recording the function calls which compute a score value gives us an uninterpreted term that can be understood as a data structure representing the solution – and the algebra of all such terms (optimal or not) represents the search space for the given problem input. With ICOREs, we describe the search universe of a DP problem by a language of abstract terms. A set of rewrite rules relates the solutions to the problem input for which they constitute the search space. An evaluation algebra collapses solutions into their associated scores. A choice function operates on scores to identify optimal solutions. The issues of search, scoring and optimization are perfectly separated. These four constituents specify a DP problem in a perfectly declarative way. All we need now is a compiler to construct code that computes the inverse of the rewrite relation (starting from the input), evaluates and tabulates partial solutions, and applies Bellman's optimization rule along the way.
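
The four constituents can be made concrete on a toy problem; the sketch below (minimum-coin change, with naive enumeration kept for exposition; a real ICORE fuses scoring and choice into the tabulated recursion) shows candidate solutions as terms, an evaluation algebra, and a choice function:

```python
from functools import lru_cache

COINS = (1, 3, 4)

@lru_cache(maxsize=None)              # tabulation of intermediate results
def solutions(amount):
    """Search space: every candidate solution as an uninterpreted term
    (here simply a tuple of coins summing to `amount`)."""
    if amount == 0:
        return frozenset({()})
    return frozenset((c,) + rest
                     for c in COINS if c <= amount
                     for rest in solutions(amount - c))

score = len                           # evaluation algebra: number of coins
best = min(solutions(6), key=score)   # choice function: minimise the score
print(best, score(best))              # (3, 3) with score 2
```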

15:00
A Logic of Information Flows

ABSTRACT. We describe a formalism for combining heterogeneous components -- web services, knowledge bases, declarative specifications such as Integer Linear Programs, Constraint Satisfaction Problems, Answer Set Programs, etc. The formalism is a family of logics, where atomic modules -- formally, classes of structures -- are combined using operations of extended Relational algebra, or, equivalently, first-order logic with a least fixed point construct. Inputs and outputs of atomic modules indicate the directionality of the information flows. As a result of this small addition, an interesting modal logic, similar to Dynamic Logic, is obtained. Many binary operations, including those studied in the calculi of binary relations and the standard constructs of imperative programming, become definable. We study the properties of this logic and identify an efficient fragment where the main computational task is solvable in deterministic polynomial time.

14:00-15:30 Session 128G (MLP)
Location: Blavatnik LT1
14:00
Neural Meta Program Synthesis

ABSTRACT. The key to attaining general artificial intelligence is to develop architectures that are capable of learning complex algorithmic behaviors modeled as programs. The ability to learn programs allows these architectures to compose high-level abstractions, which can lead to many benefits: i) enabling neural architectures to perform more complex tasks, ii) learning interpretable representations (programs which can be analyzed, debugged, or modified), and iii) better generalization to new inputs (like algorithms). In this talk, I will present some of our recent work in developing neural architectures for learning programs from examples, and also briefly discuss other applications such as program repair and fuzzing that can benefit from such neural program representations.

14:45
Learning to Analyze Programs at Scale

ABSTRACT. I will present two new results on machine learning-based program analysis. The first direction involves learning static analyzers from a given dataset of programs and is based on counter-example guided synthesis, decision tree learning and adversarial perturbations. The second direction involves learning rules that pinpoint program issues (e.g., security violations), and is based on learning from large datasets of program changes by using semantic abstractions and hierarchical clustering. In both cases, I will show that the methods successfully found issues missed by state-of-the-art, manually crafted systems.

14:00-15:30 Session 128H: Hybrid systems verification (NSV)
14:00
Verified Probabilistic Reachability in Parametric Hybrid Systems: Theory and Tool Implementation
14:30
Parallel reachability analysis of hybrid systems in XSpeed

ABSTRACT. Reachability analysis techniques are at the core of the current state-of-the-art technology for verifying safety properties of cyber-physical systems (CPS). In this talk, I will present a suite of parallel state-space exploration algorithms in the tool XSpeed that, by leveraging multi-core CPUs, improve the performance of reachability analysis of linear continuous and hybrid automaton models of CPS. A performance evaluation on several benchmarks comparing their key performance indicators will be shown. This makes it possible to identify the algorithm and the parameter choices that maximize performance for a given benchmark.

15:00
The Policy Iterations Algorithm is Actually a Newton Method

ABSTRACT. The article highlights the close link between the policy iterations algorithm and the Newton method. The main difference between them comes from the very special differential operator used in the policy iterations algorithm. We show that the policy iterations algorithm is sub-optimal in the sense that it does not fully exploit the information provided by this special differential. Finally, we relate the hypotheses that ensure the convergence of the two algorithms.

14:00-15:30 Session 128I: Parallel SAT solving (PLR)

Parallel approaches to perform SAT solving.

Chair:
14:00
Tuning Parallel SAT Solvers
SPEAKER: Dirk Nowotka

ABSTRACT. In this paper we present new implementation details and benchmarking results for our parallel portfolio solver TOPO. In particular, we discuss ideas and implementation details for the exchange of learned clauses in a massively-parallel SAT solver which is designed to run more than 1,000 solver threads in parallel. Furthermore, we go back to the roots of portfolio SAT solving, and discuss the impact of diversifying the solver by using different restart, branching and clause-database-management heuristics. We show that these techniques can be used to tune the solver towards different problems. However, in a case study on formulas derived from Bounded Model Checking problems, we see the best performance when using a rather simple clause exchange strategy. We show details of these tests and discuss possible explanations for this phenomenon.

As computing time on massively-parallel clusters is expensive, we consider it especially interesting to share these kinds of experimental results.

14:45
GPU Accelerated SAT solving

ABSTRACT. Details will follow shortly.

14:00-15:30 Session 128J (REFINE)
Location: Blavatnik LT2
14:00
A more relaxed way to make concrete: uses of heuristic search to discover implementations
SPEAKER: John Clark

ABSTRACT. Over the past decades we have developed formal frameworks to refine specifications into more detailed representations. These handle both deterministic and probabilistic specifications. We also have developed means for relaxing formality when the occasion demands it (retrenchment).  In this talk I will explore approaches to inferring programs from test data (derived from a specification), particularly involving  the use of genetic programming (GP). I will pay particular attention to quantum artifacts, e.g. discussing how GP can find Shor's Quantum Discrete Fourier Transform circuits.  Such approaches generally work by 'homing in' on an appropriate program, using deviation of results from specified results as guidance. This can produce programs that are either right or nearly right. I'll do a round trip with evolutionary computation and indicate the use of GP in unearthing formal specification fragments from test traces for traditional programs (using mutation approaches to get rid of uninteresting potential invariants inferred).

14:00-15:30 Session 128K (SYNT)
Location: Maths LT3
14:00
Scalable Synthesis with Symbolic Syntax Graphs
SPEAKER: Sumith Kulal

ABSTRACT. General-purpose program synthesizers face a tradeoff between having a rich vocabulary for output programs and the time taken to discover a solution. One performance bottleneck is the construction of a space of possible output programs that is both expressive and easy to search. In this paper we achieve both richness and scalability using a new algorithm for constructing symbolic syntax graphs out of easily specified components to represent the space of output programs. Our algorithm ensures that any program in the space is type-safe and only mutates values that are explicitly marked as mutable. It also shares structure where possible and encodes programs in a straight-line format instead of the typical bushy-tree format, which gives an inductive bias towards realistic programs. These optimizations shrink the size of the space of programs, leading to more efficient synthesis, without sacrificing expressiveness. We demonstrate the utility of our algorithm by implementing Syncro, a system for synthesis of incremental operations. We evaluate our algorithm on a suite of benchmarks and show that it performs significantly better than prior work.

14:18
On Inductive Verification and Synthesis
SPEAKER: Dennis Peuter

ABSTRACT. We study possibilities of using symbol elimination in program verification and synthesis. We consider programs in which some instructions or subprograms are not fully specified; the task is to derive conditions on the parameters (or subprograms) which imply inductivity of certain properties. We then propose a method for property-directed invariant generation and analyze its properties.

14:36
Programming by example: efficient, but not “helpful”

ABSTRACT. Programming by example (PBE) is a powerful programming paradigm based on example driven synthesis. Users can provide examples, and a tool automatically constructs a program that satisfies the examples. To investigate the impact of PBE on real-world users, we built a study around StriSynth, a tool for shell scripting by example, and recruited 27 working IT professionals to participate. In our study, we asked the users to complete three tasks with StriSynth, and the same three tasks with PowerShell, a traditional scripting language. We found that, although our participants completed the tasks more quickly with StriSynth, they reported that they believed PowerShell to be a more helpful tool.

14:54
Minimal Synthesis of String To String Functions From Examples

ABSTRACT. We study the problem of synthesizing string to string transformations from a set of input/output examples. The transformations we consider are expressed using deterministic finite automata (DFA) that read pairs of letters, one letter from the input and one from the output. The DFA corresponding to these transformations have additional constraints, ensuring that each input string is mapped to exactly one output string.

We suggest that, given a set of input/output examples, the smallest DFA consistent with the examples is a good candidate for the transformation the user was expecting. We therefore study the problem of, given a set of examples, finding a minimal DFA consistent with the examples and satisfying the functionality and totality constraints mentioned above.

We prove that, in general, this problem (the corresponding decision problem) is NP-complete. This is unlike the standard DFA minimization problem which can be solved in polynomial time. We provide several NP-hardness proofs that show the hardness of multiple (independent) variants of the problem.

Finally, we propose an algorithm for finding the minimal DFA consistent with input/output examples, which uses a reduction to SMT solvers. We implemented the algorithm and used it to evaluate the likelihood that the minimal DFA indeed corresponds to the DFA expected by the user.
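
The reduction idea can be sketched as follows. This is a simplified toy, not the paper's encoding: it uses a plain alphabet rather than the paired input/output letters of the paper, hypothetical example words, and z3 as the SMT solver, growing the number of states until a consistent DFA exists:

```python
from z3 import And, Bool, Implies, Int, Not, Or, Solver, sat

POS = ["ab", "aab"]        # hypothetical words the DFA must accept
NEG = ["", "a", "b"]       # hypothetical words it must reject
ALPHABET = "ab"

def consistent_dfa(n):
    """Model of an n-state DFA consistent with POS/NEG, or None."""
    s = Solver()
    delta = {(q, a): Int(f"d_{q}_{a}") for q in range(n) for a in ALPHABET}
    final = [Bool(f"f_{q}") for q in range(n)]
    for v in delta.values():
        s.add(0 <= v, v < n)                  # well-formed transition table
    for word, accept in [(w, True) for w in POS] + [(w, False) for w in NEG]:
        run = [Int(f"s_{word}_{i}") for i in range(len(word) + 1)]
        s.add(run[0] == 0)                    # state 0 is the initial state
        for i, a in enumerate(word):          # follow the transition table
            for q in range(n):
                s.add(Implies(run[i] == q, run[i + 1] == delta[(q, a)]))
        acc = Or([And(run[-1] == q, final[q]) for q in range(n)])
        s.add(acc if accept else Not(acc))
    return s.model() if s.check() == sat else None

n = 1
while consistent_dfa(n) is None:              # minimality: grow n until SAT
    n += 1
print("minimal consistent DFA has", n, "states")   # 3 for this toy instance
```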

14:00-15:30 Session 128L (TLA)
Location: Maths Boardroom
14:00
Applying TLA+ in a Safety-Critical Railway Project

ABSTRACT. The presentation gives an overview on the experience and insights gained from using TLA+ and PlusCal to model and develop fault-tolerant and safety-critical modules for TAS Control Platform, a platform for railway control applications up to safety integrity level 4.

14:30
State Space Explosion or: How To Fight An Uphill Battle

ABSTRACT. In this tutorial we will explore the tricks and techniques available in TLA+, TLC and the TLA Toolbox to squeeze out more performance to check models of interesting sizes despite the state space explosion problem. The tutorial will also shed light on what has been done under the hood so far to scale TLC to modern day hardware and what we are up to next to tackle the state space explosion challenge.

14:00-15:30 Session 128M (ThEdu)
14:00
Towards intuitive reasoning in axiomatic geometry

ABSTRACT. Proving lemmas in synthetic geometry is an often time-consuming endeavour - many intermediate lemmas need to be proven before interesting results can be obtained. Automated theorem provers (ATP) made much progress in the recent years and can prove many of these intermediate lemmas automatically. The interactive theorem prover Elfe accepts mathematical texts written in fair English and verifies the text with the help of ATP. Geometrical texts can thereby easily be formalized in Elfe, leaving only the cornerstones of a proof to be derived by the user. This allows for teaching axiomatic geometry to students without prior experience in formal mathematics.

14:30
Proving in the Isabelle Proof Assistant that the Set of Real Numbers is not Countable

ABSTRACT. We present a new succinct proof of the uncountability of the real numbers – optimized for clarity – based on the proof by Benjamin Porter in the Isabelle Analysis theory.

14:00-15:30 Session 128N: Certification & Formalisation I (VSTTE)

Certification & Formalisation I

Location: Maths LT2
14:00
Verified Software: Theories, Tools, ... and Engineering

ABSTRACT. Continual innovation of software verification theories & tools is essential in order to meet the challenges of ever-more complex software-intensive systems. But achieving impact ultimately requires an understanding of the engineering context in which the tools will be deployed. Based on our tried-and-trusted methods of high-integrity software development at Altran, I will identify key features of the industrial landscape in which software verification tools have to operate, and some of the pitfalls that can stop them being adopted, including regulation, qualification, scalability, cost justification, and the overall tool ecosystem. Within this context I will present Altran's own on-going research & development activities in verified software technologies. The talk will conclude by drawing some key lessons that can be applied to avoid the traps and pitfalls that tools encounter on their journey to successful deployment.

15:00
Verified Certificate Checking for Counting Votes

ABSTRACT. We introduce a new framework for verifying electronic vote counting results that are based on the Single Transferable Vote scheme (STV). Our approach frames electronic vote counting as certified computation where each execution of the counting algorithm is accompanied by a certificate that witnesses the correctness of the output. These certificates are then checked for correctness independently of how they are produced. We advocate to verify the verifier rather than the software used to produce the result. We use the theorem prover HOL to formalise the STV vote counting scheme, and obtain a fully verified certificate checker. By connecting HOL with the verified CakeML compiler, we then extract an executable that is guaranteed to behave correctly with respect to the formal specification of the protocol down to machine level. We demonstrate that our verifier can check certificates of real-size elections efficiently. Our encoding is modular, so repeating the same process for another different STV scheme would require a minimal amount of additional work.

14:00-15:30 Session 128P: Invited Speaker; Autonomy (VaVAS)

Invited Speaker

Contributed Talks: Autonomy

Location: Maths LT1
14:00
Trust me I am autonomous

ABSTRACT. The development of advanced robotics and autonomous applications now faces a confidence issue concerning their acceptability in everyday life. This confidence could be justified by the use of dependability techniques, as is done in other safety-critical applications. However, due to specific properties of robots (such as continuous physical interaction or a non-deterministic decisional layer), many techniques need to be adapted or revised. This presentation will introduce these major issues for autonomous systems, and focus on current research work at LAAS in France on model-based risk analysis for physical human-robot interactions, active safety monitoring for autonomous systems, and testing mobile robot navigation in simulation.

15:00
Policy Generation with Probabilistic Guarantees for Long-term Autonomy of a Mobile Service Robot
SPEAKER: Nick Hawes

ABSTRACT. In this demo, we will illustrate our work on integrating formal verification techniques, in particular probabilistic model checking, to enable long term deployments of mobile service robots in everyday environments. Our framework is based on generating policies for Markov decision process (MDP) models of mobile robots, using (co-safe) linear temporal logic specifications. More specifically, we build MDP models of robot navigation and action execution where the probability of successfully navigating between two locations and the expected time to do so are learnt from experience. For a specification over these models, we maximise the probability of the robot satisfying it, and minimise the expected time to do so. The policy obtained for this objective can be seen as a robot plan with attached probabilistic performance guarantees.

Our proposal is to showcase this framework live during the workshop, deploying our robot in the workshop venue and having it perform tasks throughout the day (the robot is based in the Oxford Robotics Institute, hence it can be easily moved to the workshop venue). In conjunction with showing the live robot behaviour, we will, among other things, provide visualisation of the generated policies on a map of the environment; showcase how the robot keeps track of the performance guarantees calculated offline during policy execution; and show how these guarantees can be used for execution monitoring.
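
The quantitative core can be illustrated with a toy value iteration that maximizes the probability of reaching a goal state in a tiny hypothetical MDP; the authors' framework uses PRISM-style probabilistic model checking over navigation models learnt from experience, so this sketch is only schematic:

```python
MDP = {  # state -> action -> list of (probability, successor); goal is "office"
    "dock":     {"go":   [(0.9, "corridor"), (0.1, "dock")]},
    "corridor": {"go":   [(0.8, "office"), (0.2, "dock")],
                 "wait": [(1.0, "corridor")]},
    "office":   {},
}
GOAL = "office"

p = {s: (1.0 if s == GOAL else 0.0) for s in MDP}
for _ in range(100):   # fixed-point iteration for max reachability probability
    p = {s: (1.0 if s == GOAL else
             max((sum(pr * p[t] for pr, t in outs)
                  for outs in MDP[s].values()), default=0.0))
         for s in MDP}

policy = {s: max(MDP[s], key=lambda a: sum(pr * p[t] for pr, t in MDP[s][a]))
          for s in MDP if MDP[s]}
print(p)        # the probabilistic guarantee attached to the policy
print(policy)   # argmax actions: the synthesised policy
```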

14:00-15:30 Session 128Q: Program termination and orderings (II) (WST)
14:00
Comparing on Strings: Semantic Kachinuki Order
SPEAKER: Alfons Geser

ABSTRACT. We present an extension of the Kachinuki order on strings. The Kachinuki order transforms the problem of comparing strings to the problem of comparing their syllables length-lexicographically, where the syllables are defined via a precedence on the alphabet. Our extension allows the number of syllables to increase under rewriting, provided we bound it by a weakly compatible interpretation.

14:30
Embracing Infinity - Termination of String Rewriting by Almost Linear Weight Functions

ABSTRACT. Weight functions are among the simplest methods for proving termination of string rewrite systems, and of rather limited applicability. In this working paper, we propose a generalized approach. As a first step, syllable decomposition yields a transformed, typically infinite rewrite system over an infinite alphabet, as the title indicates. Combined with generalized weight functions, termination proofs become feasible also for systems that are not necessarily simply terminating. The method is limited to systems with linear derivational complexity, however, and this working paper is restricted to original alphabets of size two. The proof principle is almost self-explanatory, and if successful, produces simple proofs with short proof certificates, often even shorter than the problem instance. A prototype implementation was used to produce nontrivial examples.
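
For reference, the classical weight-function method that this paper generalizes can be sketched in a few lines; the weights and rules below are hypothetical:

```python
WEIGHT = {"a": 2, "b": 1}                 # hypothetical positive letter weights
RULES = [("aa", "b"), ("ab", "bb")]       # hypothetical rewrite rules lhs -> rhs

def weight(word):
    return sum(WEIGHT[c] for c in word)

# Termination proof: every rule application strictly decreases the total weight.
for lhs, rhs in RULES:
    assert weight(lhs) > weight(rhs), f"{lhs} -> {rhs} does not decrease weight"
print("every rule strictly decreases the weight, so the system terminates")
```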

15:00
Well-founded models in proofs of termination

ABSTRACT. We prove that operational termination of declarative programs can be characterized by means of well-founded relations between specific formulas which can be obtained from the program. We show how to generate such relations by means of logical models where the interpretations of some binary predicates are required to be well-founded relations. Such logical models can be automatically generated by using existing tools. This provides a basis for the implementation of tools for automatically proving operational termination of declarative programs.

15:00-15:30 Session 129: ARQNL Regular Papers 3 (ARQNL)
15:00
Pseudo-Propositional Logic

ABSTRACT. Propositional logic is the main ingredient used to build SAT solvers, which have gradually become powerful tools for solving a variety of important and complicated problems such as planning, scheduling, and verification. However, further uses of these solvers are limited by the complexity of transforming counting constraints into conjunctive normal form (CNF). This transformation generally leads to a substantial increase in the number of variables and clauses, due to the limited expressive power of propositional logic. To overcome this drawback, this work extends the alphabet of propositional logic by including the natural numbers as a means of counting, and adjusts the underlying language accordingly. The resulting representational formalism, called pseudo-propositional logic, can be viewed as a generalization of propositional logic in which counting constraints are naturally formulated, and the generalized inference rules can be as easily applied and implemented as arithmetic.
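
As a hedged illustration of the blow-up referred to above (not an example from the paper): a counting constraint that an arithmetic-enriched language can state directly as a single inequality requires, under the naive pairwise CNF encoding, one clause for every pair of variables:

```latex
\[
  \sum_{i=1}^{n} x_i \le 1
  \quad\leadsto\quad
  \bigwedge_{1 \le i < j \le n} \bigl(\lnot x_i \lor \lnot x_j\bigr)
  \qquad \text{(a total of } \tbinom{n}{2} \text{ clauses).}
\]
```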

15:30-16:00 Coffee Break
16:00-17:30 Session 130A: ARQNL Presentation only & System demos (ARQNL)
16:00
Formalization of a Paraconsistent Infinite-Valued Logic

ABSTRACT. Classical logics are explosive -- from a contradiction everything follows. This is problematic, e.g., when reasoning about contradictory evidence. In paraconsistent logics, not everything follows from a contradiction. In this paper, formalized proofs of two meta-theorems about a propositional fragment of a paraconsistent infinite-valued higher-order logic are presented. One implies that the validity of any formula can be decided by considering a finite number of truth values and evaluating the formula in all models over these. The other implies that there is no upper bound on the size of this finite set -- it depends on the number of propositional symbols in the formula.

16:30
System Demonstration: The Higher-Order Prover Leo-III

ABSTRACT. The higher-order theorem prover Leo-III will be demonstrated.

16:00-18:00 Session 130B: AVoCS Regular Papers 2 (AVOCS)
16:00
Analyzing Consistency of Formal Requirements

ABSTRACT. In the development of safety-critical embedded systems, requirements-driven approaches are widely used. Expressing functional requirements in formal languages enables reasoning and formal testing. This paper proposes the Simplified Universal Pattern (SUP) as an easy-to-use formalism and compares it to SPS, another commonly used specification pattern system. Consistency is an important property of requirements that can be checked already in early design phases. However, formal definitions of consistency are rare in the literature and tend to be either too weak or computationally too complex to be applicable to industrial systems. Therefore, this work proposes a new formal consistency notion for the SUP, called partial consistency, that is a trade-off between exhaustiveness and complexity. Partial consistency identifies critical cases and verifies whether these cause conflicts between requirements.

16:30
Formal Verification of Synchronisation, Gossip and Environmental Effects for Wireless Sensor Networks
SPEAKER: Matt Webster

ABSTRACT. The Internet of Things (IoT) promises a revolution in the monitoring and control of a wide range of applications, from urban water supply networks and precision agriculture food production, to vehicle connectivity and healthcare monitoring. For applications in such critical areas, control software and protocols for IoT systems must be verified to be both robust and reliable. Two of the largest obstacles to robustness and reliability in IoT systems are effects on the hardware caused by environmental conditions, and the choice of parameters used by the protocol. In this paper we use probabilistic model checking to verify that a synchronisation and dissemination protocol for Wireless Sensor Networks (WSNs) is correct with respect to its requirements, and is not adversely affected by the environment. We show how the protocol can be converted into a logical model and then analysed using the probabilistic model-checker, PRISM. Using this approach we prove under which circumstances the protocol is guaranteed to synchronise all nodes and disseminate new information to all nodes. We also examine the bounds on synchronisation as the environment changes the performance of the hardware clock, and investigate the scalability constraints of this approach.

17:00
Writing a Model Checker in 80 Days: Reusable Libraries and Custom Implementation

ABSTRACT. During a course on model checking we developed BMoth, a full-stack model checker for classical B, featuring both explicit-state and symbolic model checking. Given that we only had a single university term to finish the project, a particular focus was on reusing existing libraries to reduce implementation workload.

In the following, we report on a selection of reusable libraries, which can be combined into a prototypical model checker relatively easily. Additionally, we discuss where custom code depending on the specification language to be checked is needed and where further optimization can take place. To conclude, we compare to other model checkers for classical B.

16:00-18:00 Session 130C (ICLP-DC)
Chair:
Location: Maths L6
16:00
Model Revision of Logical Regulatory Networks using Logic-based Tools

ABSTRACT. Recently, biological data has been produced at an increasing rate, calling for computational models able to organize and computationally reproduce existing observations. In particular, biological regulatory networks have been modeled relying on the Sign Consistency Model or the logical formalism. However, their construction still relies completely on a domain expert to choose the best functions for every network component. Due to the number of possible functions for k arguments, this is typically an error-prone process. Here, we propose to assist the modeler using logic-based tools to verify the model, identifying crucial network components responsible for model inconsistency. We intend to obtain a model-building procedure capable of providing the modeler with repaired models satisfying a set of pre-defined criteria, thereby minimizing possible modeling errors.

16:00-18:00 Session 130D: Logic and Constraints in Applications (LPOP)
16:00
Invited Talk: The Young Software Engineer’s Guide to Using Formal Methods

ABSTRACT. If programming was ever a hermitlike activity, those days are in the past. Like other internet-aided social processes, software engineers connect and learn online. Open-source repositories exemplify common coding patterns and best practices, videos and interactive tutorials teach foundations and pass on insight, and online forums invite and answer technical questions. These knowledge-sharing facilities make it easier for engineers to pick up new techniques, coding practices, languages, and libraries. This is good news in a world where software quality is as important as ever, where logic specification can be used to declare intent, and where formal verification tools have become practically feasible.

In this talk, I give one view of the future of software engineering, especially with an eye toward software quality. I will survey some techniques, look at the history of tools, and inspire with some examples of what can be daily routine in the lives of next-generation software engineers.

16:40
How to upgrade ASP for true dynamic modelling and solving?

ABSTRACT. The world is dynamic, and ASP is not! This is a provocative way to say that ASP is not up to dealing with many complex real-world applications having a dynamic nature, let alone transitions over states, not even mentioning more fine-grained temporal structures.

16:50
A Rule-Based Tool for Analysis and Generation of Graphs Applied to Mason's Marks

ABSTRACT. We are developing a rule-based implementation of a tool to analyse and generate graphs. It is used in the domain of mason's marks. For thousands of years, stonemasons have been inscribing these symbolic signs on dressed stone. Geometrically, mason's marks are line drawings. They consist of a pattern of straight lines, sometimes circles and arcs. We represent mason's marks by connected planar graphs.

Our prototype tool for analysis and generation of graphs is implemented in the rule-based declarative language Constraint Handling Rules (CHR). It features

- a vertex-centric logical graph representation as constraints,
- derivation of properties and statistics from graphs,
- recognition of (sub)graphs and patterns in a graph,
- automatic generation of graphs from given constrained subgraphs,
- drawing graphs by visualization using SVG graphics.

In particular, we have started to use the tool to classify and to invent mason's marks. In principle, our tool can be applied to any problem domain that admits modeling as graphs. The drawing and generation module of our tool is available online at http://chr.informatik.uni-ulm.de/mason.

17:00
A software system should be declarative except where it interacts with the real world

ABSTRACT. We propose a system design principle that explains how to use declarative programming (logic and functional) together with imperative programming. The advantages of declarative programming are well known; they include ease of analysis, verification, testing, optimization, maintenance, upgrading, and distributed implementation. We will not elaborate on these advantages here, but rather focus on what part of the software system should be written declaratively. Declarative programming cannot interface directly with the real world since it does not support common real-world concepts such as physical time, named state, and nondeterminism. Other programming paradigms that support these concepts must therefore be used, such as imperative programming (which contains named state). To optimize the system design, we propose that real-world concepts should only be used where they are needed, namely where the system interfaces with the real world. It follows that a software system should be built completely declaratively except where it interfaces with the real world. We motivate this principle with examples from our research and we give it a precise formal definition.

17:10
Questions about logic and constraints in real-world applications
17:20
Panel: Future of Programming with Logic and Knowledge
17:50
Closing
SPEAKER: Tuncay Tekle
16:00-18:00 Session 130E: Modelling and Solving (LaSh)
16:00
ESSENCE, A Language for Specifying Combinatorial Problems: What, Why and So What?

ABSTRACT. ESSENCE is a formal language for specifying combinatorial (decision or optimisation) problems at a high level of abstraction. It is the result of our attempt to design a formal language that enables abstract problem specifications similar to rigorous specifications that combine natural language and discrete mathematics, such as those that appear in Garey and Johnson's catalog of NP-complete problems. This talk presents the design of ESSENCE - focusing on its abstraction features - the motivation behind it, and why it can be useful to the wider LaSh community.

16:25
Automated Constraint Modelling with Conjure

ABSTRACT. When solving a combinatorial problem, the formulation or model of the problem is critical to the efficiency of the solver. Automating the modelling process has long been of interest because of the expertise and time required to produce an effective model of a given problem. I describe a method to automatically produce constraint models from a problem specification written in the abstract constraint specification language Essence. The approach is to incrementally refine the specification into a concrete model by applying a chosen refinement rule at each step. Any non-trivial specification may be refined in multiple ways, creating a space of models from which to choose.

The handling of symmetries is a particularly important aspect of automated modelling. Many combinatorial optimisation problems contain symmetry, which can lead to redundant search. If a partial assignment is shown to be invalid, we are wasting time if we ever consider a symmetric equivalent of it. A particularly important class of symmetries are those introduced by the constraint modelling process: modelling symmetries. I show how modelling symmetries may be broken automatically as they enter a model during refinement, obviating the need for an expensive symmetry detection step following model formulation.

This approach is implemented in a system called Conjure. The models produced by Conjure are compared to hand-written constraint models from the literature that are known to be effective. Empirical results confirm that Conjure can reproduce successfully the kernels of the constraint models of several benchmark problems found in the literature.
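
The effect of breaking a modelling symmetry can be seen in a toy example (a Python sketch of the idea only; Conjure adds such constraints automatically during refinement): two interchangeable decision variables make every solution appear twice, and an ordering constraint keeps one representative per symmetry class.

    from itertools import product

    def solutions(n, symmetry_break=False):
        # Choose two interchangeable numbers x, y with x + y == n.
        # Without the ordering constraint each unordered solution is
        # found twice; with x <= y only one representative survives.
        return [(x, y)
                for x, y in product(range(n + 1), repeat=2)
                if x + y == n and (not symmetry_break or x <= y)]

    print(len(solutions(6)))                       # 7 ordered assignments
    print(len(solutions(6, symmetry_break=True)))  # 4 representatives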
16:50
Solving #SAT by Parameterized Algorithms: Exploiting Small Treewidth

ABSTRACT. In this talk, we first give an overview of solving #SAT using parameterized algorithms that exploit small treewidth. The problem #SAT asks for the number of solutions of a Boolean formula. If the treewidth of a graph representation of the given formula is fixed, one can solve #SAT efficiently by means of a dynamic programming algorithm on tree decompositions.

However, finding an optimal tree decomposition is itself an NP-hard problem. Existing methods for finding tree decompositions of small width either (a) yield optimal tree decompositions but are applicable only to small instances or (b) are based on greedy heuristics which often yield tree decompositions that are far from optimal. We propose a new method that combines (a) and (b), where a heuristically obtained tree decomposition is improved locally. We sketch results of an experimental evaluation of our new method.

Further, we present a novel architecture of dynamic programming for efficient parallel execution on the GPU. We provide thorough experiments and compare the runtime of our system with state-of-the-art solvers. Our results are encouraging if instances have treewidth at most 30, which is the case for the majority of counting benchmark instances.
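
For orientation, the naive counter below (a Python sketch, not the algorithm from the talk) enumerates all 2^n assignments; the dynamic programming on a tree decomposition replaces this global sweep with tables of size roughly 2^treewidth per decomposition node.

    from itertools import product

    def count_models(num_vars, clauses):
        # Naive #SAT: try every assignment of the variables 1..num_vars.
        # A clause is a list of integers; literal k means "variable k is
        # true", -k means "variable k is false".
        count = 0
        for bits in product([False, True], repeat=num_vars):
            if all(any(bits[abs(l) - 1] == (l > 0) for l in clause)
                   for clause in clauses):
                count += 1
        return count

    # (x1 or x2) and (not x1 or x3) has 4 models over 3 variables.
    print(count_models(3, [[1, 2], [-1, 3]]))  # -> 4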

17:15
Decidable Linear Tree Constraints

ABSTRACT. Linear constraints on infinite trees arise in object-oriented resource analysis, when inferring resource types for programs similar to (a fragment of) Java. These types encode the heap space usage of programs, and one can calculate an upper bound on it using the constraint solutions. I will explain the logic and the meaning of the constraints with examples and briefly describe their role in program analysis. I then focus on the decision procedures for both list and tree constraints. After presenting the ideas for the list case, I will generalize them to trees and point out the difficulties that appear there and how they are treated.

16:00-18:00 Session 130F (MLP)
Location: Blavatnik LT1
16:00
DeepBugs: A Learning Approach to Name-based Bug Detection

ABSTRACT. Natural language elements in source code, e.g., the names of variables and functions, convey useful information. However, most existing bug detection tools ignore this information and therefore miss some classes of bugs. The few existing name-based bug detection approaches reason about names on a syntactic level and rely on manually designed and tuned algorithms to detect bugs. This talk presents DeepBugs, a learning approach to name-based bug detection, which reasons about names based on a semantic representation and which automatically learns bug detectors instead of manually writing them. We formulate bug detection as a binary classification problem and train a classifier that distinguishes correct from incorrect code. To address the challenge that effectively learning a bug detector requires examples of both correct and incorrect code, we create likely incorrect code examples from an existing corpus of code through simple code transformations. A novel insight learned from our work is that learning from artificially seeded bugs yields bug detectors that are effective at finding bugs in real-world code. We implement our idea into a framework for learning-based and name-based bug detection. Three bug detectors built on top of the framework detect accidentally swapped function arguments, incorrect binary operators, and incorrect operands in binary operations. Applying the approach to a corpus of 150,000 JavaScript files yields bug detectors that have a high accuracy (between 89% and 95%), are very efficient (less than 20 milliseconds per analyzed file), and reveal 102 programming mistakes (with 68% true positive rate) in real-world code.
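
The seeded-bug idea can be illustrated with a small sketch (hypothetical call data and a deliberately crude featurisation, not the paper's learned embeddings): each mined call yields a "buggy" twin with swapped arguments, and a binary classifier learns to tell them apart.

    from sklearn.feature_extraction import FeatureHasher
    from sklearn.linear_model import LogisticRegression

    # Stand-in for a corpus of mined call sites: (function, arg1, arg2).
    correct_calls = [("setSize", "width", "height"),
                     ("setPoint", "x", "y"),
                     ("copy", "source", "target")] * 50

    def featurise(fn, a1, a2):
        return {"fn=" + fn: 1, "arg1=" + a1: 1, "arg2=" + a2: 1}

    examples, labels = [], []
    for fn, a1, a2 in correct_calls:
        examples.append(featurise(fn, a1, a2)); labels.append(0)  # correct
        examples.append(featurise(fn, a2, a1)); labels.append(1)  # seeded bug

    hasher = FeatureHasher(n_features=64)
    clf = LogisticRegression().fit(hasher.transform(examples), labels)
    # Flags the swapped call as likely buggy (label 1).
    print(clf.predict(hasher.transform([featurise("setSize", "height", "width")])))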

16:45
Measuring software development productivity: a machine learning approach
SPEAKER: Ian Wright

ABSTRACT. We apply machine learning to version control data to measure software development productivity. Our models measure both the quantity and quality of produced code. Quantity is defined by a model that predicts the labor hours supplied by the 'standard coder' to make any code change, and quality is defined by a model that predicts the distribution of different kinds of problems identified by a static code analysis tool.
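
As a crude stand-in for the quantity model (an illustrative Python sketch with made-up numbers; the talk's 'standard coder' model is learned from much richer version-control features), one can regress effort on simple diff statistics:

    from sklearn.linear_model import LinearRegression

    # Toy per-commit features: (lines added, lines deleted, files touched),
    # with reported labor hours as the target. All values are invented.
    X = [[120, 10, 3], [5, 2, 1], [300, 50, 8], [40, 0, 2]]
    hours = [6.0, 0.5, 12.0, 2.0]

    model = LinearRegression().fit(X, hours)
    print(model.predict([[80, 5, 2]]))  # estimated hours for a new change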

17:15
Answering Cloze-style Software Questions Using Stack Overflow
SPEAKER: Ezra Winston

ABSTRACT. Modern Question Answering (QA) systems rely on both knowledge bases (KBs) and unstructured text corpora as sources for their answers. KBs, when available, generally offer more precise answers than unstructured text. However, in specialized domains such as software engineering, QA requires deep domain expertise and KBs are often lacking. In this paper we tackle such specialized QA by using both text and semi-structured knowledge, in the form of a corpus of entity-labeled documents. We propose CASE, a hybrid of an RNN language model and an entity co-occurrence model, where the entity co-occurrence model is learned from the entity-labeled corpus. On QUASAR-S, a dataset derived from Stack Overflow consisting of Cloze (fill-in-the-blank) software questions and a corpus of tagged posts, CASE shows large accuracy gains over strong baselines.
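
The general hybrid idea, interpolating a language-model distribution with an entity co-occurrence distribution over candidate answers, can be sketched as a simple mixture (toy probabilities, not the CASE architecture):

    def mix(p_lm, p_cooc, lam=0.5):
        # Interpolate two distributions over candidate answer entities.
        vocab = set(p_lm) | set(p_cooc)
        return {e: lam * p_lm.get(e, 0.0) + (1 - lam) * p_cooc.get(e, 0.0)
                for e in vocab}

    # "In Java, a ____ cannot be instantiated."
    p_lm = {"interface": 0.4, "class": 0.5, "lambda": 0.1}
    p_cooc = {"interface": 0.7, "abstract-class": 0.3}  # from tagged posts
    mixed = mix(p_lm, p_cooc)
    print(max(mixed, key=mixed.get))  # -> interface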

16:00-18:00 Session 130H: Parallel parity game solving (PLR)

On parallel algorithms to solve parity games.
Chair:
16:00
Using work-stealing to parallelize symbolic algorithms and parity game solvers

ABSTRACT. For multi-core computers, an important paradigm for parallel execution is task-based or fork-join parallelism, typically implemented using work-stealing. This paradigm is a good fit for algorithms that contain recursion, but is also suitable in other contexts, for example the load-balancing of parallel computations on arrays. We apply work-stealing in several verification contexts. We parallelize operations on binary decision diagrams and verification tools that use binary decision diagrams, applying work-stealing both at the low level of the individual operations and at the higher level of the search algorithms. We parallelize parity game solvers in two ways: we use work-stealing to parallelize the backward search for attractor computation, and we use it to parallelize all steps of the strategy improvement algorithm. In these applications, work-stealing is necessary but not sufficient for good performance; we must also avoid locking and instead use lock-free techniques for scalable performance. We use lock-free techniques not only in the parallelized algorithms but also to implement Lace, our work-stealing framework, which itself scales well across cores.
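
The backward search in question is the classic attractor computation; a sequential Python sketch follows (the work-stealing parallelisation and lock-free data structures of the talk are omitted).

    def attractor(vertices, edges, owner, player, targets):
        # Vertices from which `player` can force the play into `targets`:
        # a vertex owned by `player` joins as soon as one successor is in
        # the set; an opponent vertex joins once all its successors are.
        preds = {v: set() for v in vertices}
        escape = {v: 0 for v in vertices}  # successors not yet attracted
        for u, v in edges:
            preds[v].add(u)
            escape[u] += 1
        attr = set(targets)
        todo = list(attr)
        while todo:
            v = todo.pop()
            for u in preds[v]:
                if u in attr:
                    continue
                escape[u] -= 1
                if owner[u] == player or escape[u] == 0:
                    attr.add(u)
                    todo.append(u)
        return attr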

16:45
An Efficient Zielonka's Algorithm for Parallel Parity Games

ABSTRACT. Parity games are abstract infinite-duration two-player games, widely studied in computer science. Several solution algorithms have been proposed and also implemented in PGSolver, the community tool of choice, whose benchmarks have established the Zielonka Recursive (ZR) algorithm as the best performing on randomly generated games. With the aim of scaling to wider classes of parity games, several improvements and optimizations have been proposed over the existing algorithms. However, the benefit of using the full computational power of even common modern multicore processors has not yet been explored, which is all the more surprising given that the algorithms in PGSolver are sequential. In this work we introduce and implement, on a multicore architecture, a parallel version of the attractor computation, the main kernel of the ZR algorithm. This choice follows from our observation that more than 99% of the execution time of the ZR algorithm is spent in this module.
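
For reference, the ZR algorithm itself is short (a sequential Python sketch assuming every vertex has at least one successor; `attractor` is the function sketched above, and `owner` and `priority` are dictionaries over a set of vertices):

    def zielonka(vertices, edges, owner, priority):
        # Returns (W0, W1): the winning regions of players 0 and 1.
        if not vertices:
            return set(), set()
        p = max(priority[v] for v in vertices)
        i = p % 2  # the player favoured by the top priority
        A = attractor(vertices, edges, owner, i,
                      {v for v in vertices if priority[v] == p})
        rest = vertices - A
        sub = {(u, v) for (u, v) in edges if u in rest and v in rest}
        W = list(zielonka(rest, sub, owner, priority))
        if not W[1 - i]:
            W[i], W[1 - i] = set(vertices), set()
            return W[0], W[1]
        B = attractor(vertices, edges, owner, 1 - i, W[1 - i])
        rest2 = vertices - B
        sub2 = {(u, v) for (u, v) in edges if u in rest2 and v in rest2}
        W2 = list(zielonka(rest2, sub2, owner, priority))
        W2[1 - i] |= B
        return W2[0], W2[1]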

16:00-18:00 Session 130I (REFINE)
Chair:
Location: Blavatnik LT2
16:00
Refining Santa: An Exercise in Efficient Synchronization

ABSTRACT. The Santa Claus Problem is an intricate exercise in concurrent programming. This paper outlines the refinement steps used to develop a highly efficient implementation with concurrent objects, starting from a very simple specification. The efficiency of the implementation is compared with implementations in three other languages.

16:30
A Theory of Lazy Imperative Timing

ABSTRACT. We present a theory of lazy imperative timing.

16:00-18:00 Session 130J (SYNT)
Location: Maths LT3
16:00
Towards correct-by-construction controller synthesis for self-driving cars

ABSTRACT. Can we design a provably-safe self-driving car? As the automobile evolves toward full autonomy, many driver convenience and safety automation systems are being introduced into production vehicles. Assuring the seamless and safe integration of each new automation function with existing control functions is a major challenge. The challenge is exacerbated by the sheer amount of control and decision-making functionality required for a self-driving car. This talk presents recent results towards addressing this problem through the lens of formal methods and correct-by-construction controller synthesis. I will discuss three uses of controller synthesis: for composition, for supervision, and for falsification. Composition refers to the task of designing individual controllers for interacting subsystems and providing guarantees on their integrated operation. Supervision refers to the task of providing a safety envelope for a legacy controller or human drivers. Falsification refers to the task of algorithmically finding violations of specifications by given designs. I will use adaptive cruise control and lane keeping systems to demonstrate some of these ideas, both in simulation and on a real vehicle in Mcity, the autonomous vehicle testing facility at the University of Michigan.

16:45
Competition Report: SYNTCOMP
17:00
Competition Report: SyGuS-COMP
17:15
Probabilistic and Interpretable Models of Code

ABSTRACT. I will present a method for building probabilistic models of code which combines techniques from synthesis, sampling and decision tree learning. The method scales to large datasets and is more precise and faster to train than state-of-the-art neural networks. The resulting models can be used for various tasks including statistical code synthesis, repair, debugging and translation. In the talk I will discuss what theoretical guarantees can be stated about these models, present experimental results showing that the models are applicable to various programming languages (e.g., Java, JavaScript, Python), and discuss several open problems that span the intersection of program analysis, synthesis and machine learning.

16:00-18:00 Session 130K (TLA)
Location: Maths Boardroom
16:00
Invariants in Distributed Algorithms

ABSTRACT. We will discuss making invariants explicit in specifications of distributed algorithms. Clearly this helps prove properties of distributed algorithms. More importantly, we show that it also makes distributed algorithms easier to express and to understand at a high level, especially through direct uses of message histories. We will use example specifications in TLA+, for verification of Paxos using TLAPS, as well as complete executable specifications in DistAlgo, a high-level language for distributed algorithms.

16:45
Proving properties of a minimal covering algorithm

ABSTRACT. This work concerns the specification, using TLA+, and proof of properties of an algorithm for solving the minimal covering problem, which we have implemented in Python. Minimal covering is the problem of choosing a minimal number of elements from a given set to cover all the elements of another given set, with coverage defined by a given function. The TLA+ modules are available online. The proofs of safety properties in these modules have been checked with the proof assistant TLAPS (version 1.4.3), in the presence of the tools Zenon, CVC4, Isabelle, and LS4. Our motivation for studying and implementing this algorithm is to convert binary decision diagrams to formulas of small size, so that humans can read the results of symbolic computation.
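
To fix intuition about the problem being specified (a greedy Python approximation for illustration only; the verified algorithm computes exact minimal covers), consider:

    def greedy_cover(universe, covers):
        # covers: candidate element -> set of items it covers.
        # Repeatedly pick the candidate covering the most remaining items.
        remaining, chosen = set(universe), []
        while remaining:
            best = max(covers, key=lambda c: len(covers[c] & remaining))
            if not covers[best] & remaining:
                raise ValueError("some items cannot be covered")
            chosen.append(best)
            remaining -= covers[best]
        return chosen

    covers = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5}}
    print(greedy_cover({1, 2, 3, 4, 5}, covers))  # -> ['a', 'c']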

16:00-18:00 Session 130L (ThEdu)

ThEdu Business Meeting

16:00
ThEdu'18 business meeting

ABSTRACT. ThEdu'18 business meeting. Agenda:

  • ThEdu'18 post-proceedings at EPTCS
  • ThEdu'19 - where, when and other organizational issues

16:00-18:00 Session 130M: Certification & Formalisation II (VSTTE)
Location: Maths LT2
16:00
Program Verification in the Presence of I/O: Semantics, verified library routines, and verified applications
SPEAKER: Hugo Férée

ABSTRACT. Software verification tools that build machine-checked proofs of functional correctness usually focus on the algorithmic content of the code. Their proofs are not grounded in a formal semantic model of the environment that the program runs in, or the program’s interaction with that environment. As a result, several layers of translation and wrapper code must be trusted. In contrast, the CakeML project focuses on end-to-end verification to replace this trusted code with verified code in a cost-effective manner. In this paper, we present infrastructure for developing and verifying impure functional programs with I/O and imperative file handling. Specifically, we extend CakeML with a low-level model of file I/O, and verify a high-level file I/O library in terms of the model. We use this library to develop and verify several Unix-style command-line utilities: cat, sort, grep, diff and patch. The workflow we present is built around the HOL4 theorem prover, and therefore all our results have machine-checked proofs.

16:30
TWAM: A Certifying Abstract Machine for Logic Programs

ABSTRACT. Type-preserving (or typed) compilation uses typing derivations to certify correctness properties of compilation. We have designed and implemented a type-preserving compiler for a simply-typed dialect of Prolog we call T-Prolog. The crux of our approach is a new certifying abstract machine which we call the Typed Warren Abstract Machine (TWAM). The TWAM has a dependent type system strong enough to specify the semantics of a logic program in the logical framework LF. We present a soundness metatheorem which constitutes a partial correctness guarantee: well-typed programs implement the logic program specified by their type. This metatheorem justifies our design and implementation of a certifying compiler from T-Prolog to TWAM.

17:00
A Java bytecode formalisation

ABSTRACT. This paper presents the first Coq formalisation of the full Java bytecode instruction set and its semantics. The set of instructions is organised in a hierarchy depending on how the instructions deal with the runtime structures of the Java Virtual Machine, such as threads, stacks, and the heap. The hierarchical nature of Coq modules neatly reinforces this view and facilitates the understanding of the Java bytecode semantics. This approach makes it possible both to conduct verification of properties for programs and to prove metatheoretical results for the language. Based upon our formalisation experience, the deficiencies of the current informal bytecode language specification are discussed.

17:30
Formalising Executable Specifications of Low-Level Systems
SPEAKER: Paolo Torrini

ABSTRACT. Formal models of low-level applications often rely on the distinction between an executable layer and an underlying hardware abstraction. On one hand, the formal development of the executable specification can benefit from a language designed to support theorem proving in connection with the specific domain of the application; a deep embedding makes it possible to manipulate such a language directly, e.g. in formally translating the executable specification to an efficient program. On the other hand, the abstract character of the hardware specification makes it desirable to take full advantage of the expressive power of the theorem prover, relying on a shallow embedding. Implementing deeply embedded languages that allow importing shallow specifications can be an answer to the problem of combining the two approaches. We use Coq to formalise such a deep embedding, which we use to support the formal development of an OS kernel. We take advantage of the Coq logic in specifying the deep embedding relationally, using structural operational semantics, and then extract an executable interpreter from the type soundness proof.
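
The flavour of a deep embedding can be conveyed outside Coq (a Python sketch, purely illustrative): programs of the embedded language are data, so they can be interpreted and also inspected and transformed, which a shallow embedding as host-language functions would not allow.

    from dataclasses import dataclass

    # Deep embedding of a tiny expression language: terms are data.
    @dataclass
    class Lit:
        val: int

    @dataclass
    class Add:
        left: object
        right: object

    @dataclass
    class Mul:
        left: object
        right: object

    def eval_expr(e):
        # An interpreter defined by recursion over the syntax tree.
        if isinstance(e, Lit): return e.val
        if isinstance(e, Add): return eval_expr(e.left) + eval_expr(e.right)
        if isinstance(e, Mul): return eval_expr(e.left) * eval_expr(e.right)
        raise TypeError(e)

    def simplify(e):
        # A transformation that inspects the embedded syntax.
        if isinstance(e, Mul) and isinstance(e.left, Lit) and e.left.val == 1:
            return simplify(e.right)
        return e

    print(eval_expr(simplify(Mul(Lit(1), Add(Lit(2), Lit(3))))))  # -> 5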

16:00-18:00 Session 130N: Autonomous Vehicles (VaVAS)

Contributed Talks: Autonomous Vehicles

Location: Maths LT1
16:00
Probabilistic Model Checking for Autonomous Agent Strategy Synthesis - Extended Abstract
SPEAKER: Alice Miller

ABSTRACT. We describe work that was presented at the NASA Formal Methods Symposium and published in the symposium proceedings. We give an overview of the probabilistic models that we presented for autonomous agent search and retrieve missions. We summarise how probabilistic model checking and the probabilistic model checker PRISM were used for optimal strategy generation, with a view to embedding the generated strategies in the UAV's controller software.

16:30
Towards a verifiable decision making framework for self-driving vehicles

ABSTRACT. A new verification framework is presented for the decision making of autonomous vehicles (AVs). The overall structure of the framework consists of: (1) a perception system of sensors that feeds into (2) a reasoning agent, based on the Jason architecture, that operates on-board an AV and interacts with a model of the environment through (3) a multi-input multi-output adaptive control system. The agent relies on a set of rules, planners and actions to achieve its ultimate goal of driving the AV safely from a starting point to its destination. The verification framework deploys an innovative combination of two well-known verification tools: (1) the model checker for multi-agent systems (MCMAS), to check the consistency and stability of the agent's logic rules and perception-based beliefs before any action is chosen, and (2) the PRISM probabilistic model checker, to provide the agent with the probability of success when it decides to take a planned movement sequence. The agent selects the movement actions with the highest probability of success. The planned actions are executed by a control system that moves the AV on the ground through a set of primitive movement skills acting on its actuators. The framework adopts the Jason agent paradigm to design the reasoning and decision-making process. The Robot Operating System (ROS) and the Gazebo simulator are used to model the AV, its sensors and the surrounding environment. The agent connects to a MATLAB-based perception and control system to steer the AV.

17:00
Formalisation of the Rules of the Road for embedding into an Autonomous Vehicle Agent
SPEAKER: Gleifer Alves

ABSTRACT. We propose a description language for formalising the "Rules of the Road" (e.g., the UK Highway Code). By way of example, we represent the subset of the Highway Code concerning road junctions, and express these Rules of the Road in linear temporal logic (LTL). Our aim is to "extract" from this generic representation a set of beliefs, goals and plans that can be used by a BDI agent to augment its universal autonomous vehicle control (e.g., route planning and obstacle avoidance capabilities) with region-specific rules for road use. We intend to target the Gwendolen agent programming language as a prototype for this system, which provides formal verification possibilities.
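
As an illustration of the intended style (proposition names are hypothetical, not taken from the paper), one junction rule might be rendered in LTL as

    G ( at_junction ∧ ¬have_priority  →  (¬enter U junction_clear) )

that is, whenever the vehicle is at a junction without priority, it must not enter until the junction is clear.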

16:00-18:00 Session 130P: Termination Competition (report and short talks) (WST)

TERMCOMP 2018 - Report

Presentation of tool papers:

  • M. Brockschmidt, S. Dollase, F. Emrich, F. Frohn, C. Fuhs, J. Giesl, M. Hark, J. Hensel, D. Korzeniewski, M. Naaf, T. Ströder. AProVE at the Termination Competition 2018
  • Florian Messner and Christian Sternagel. TermComp 2018 Participant: TTT2
  • Dieter Hofbauer. MultumNonMulta at TermComp 2018
  • Raúl Gutiérrez and Salvador Lucas. MU-TERM at the 2018 Termination Competition
  • Jesús J. Doménech and Samir Genaim. iRankFinder
19:15-21:30 Workshops dinner at Magdalen College (FLoC)

Workshops dinner at Magdalen College. Drinks reception from 7:15pm; to be seated by 7:45pm (pre-booking via the FLoC registration system required; guests welcome).